Feb 02 10:38:39 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 02 10:38:39 crc restorecon[4664]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 
10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc 
restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 
10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 
10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc 
restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 02 10:38:40 crc kubenswrapper[4782]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 10:38:40 crc kubenswrapper[4782]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 02 10:38:40 crc kubenswrapper[4782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 10:38:40 crc kubenswrapper[4782]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 10:38:40 crc kubenswrapper[4782]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 02 10:38:40 crc kubenswrapper[4782]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.589393 4782 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594806 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594825 4782 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594830 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594835 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594839 4782 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594844 4782 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594848 4782 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594854 4782 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594860 4782 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594865 4782 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594875 4782 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594883 4782 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594888 4782 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594894 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594898 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594901 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594905 4782 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594908 4782 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594912 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594917 4782 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594922 4782 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594926 4782 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594930 4782 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594935 4782 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594939 4782 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594951 4782 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594956 4782 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594961 4782 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594965 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594969 4782 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594973 4782 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594977 4782 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594981 4782 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594985 4782 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594989 4782 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594993 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594996 4782 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595001 4782 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595006 4782 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595010 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595014 4782 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595017 4782 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595021 4782 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595025 4782 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595028 4782 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595032 4782 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595037 4782 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595044 4782 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595052 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595058 4782 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595064 4782 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595069 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595074 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595079 4782 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595083 4782 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595088 4782 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595092 4782 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595097 4782 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595101 4782 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595104 4782 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595108 4782 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595111 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595115 4782 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595118 4782 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595123 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595128 4782 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595132 4782 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595135 4782 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595139 4782 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595142 4782 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595146 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
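The deprecation notices above all point at the same fix: move the flags into the file named by --config (here /etc/kubernetes/kubelet.conf, per the FLAG dump below). As a rough sketch only, the equivalent KubeletConfiguration stanzas might look like the following, with field names taken from the kubelet.config.k8s.io/v1beta1 API and values copied from this log; treat it as illustrative, not as the config this node actually uses:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint (unix:// scheme assumed)
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
    registerWithTaints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    # replaces --system-reserved=cpu=200m,ephemeral-storage=350Mi,memory=350Mi
    systemReserved:
      cpu: 200m
      ephemeral-storage: 350Mi
      memory: 350Mi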
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595238 4782 flags.go:64] FLAG: --address="0.0.0.0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595247 4782 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595257 4782 flags.go:64] FLAG: --anonymous-auth="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595263 4782 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595269 4782 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595273 4782 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595280 4782 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595286 4782 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595291 4782 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595295 4782 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595300 4782 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595305 4782 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595309 4782 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595313 4782 flags.go:64] FLAG: --cgroup-root=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595317 4782 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595322 4782 flags.go:64] FLAG: --client-ca-file=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595326 4782 flags.go:64] FLAG: --cloud-config=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595330 4782 flags.go:64] FLAG: --cloud-provider=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595334 4782 flags.go:64] FLAG: --cluster-dns="[]"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595340 4782 flags.go:64] FLAG: --cluster-domain=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595344 4782 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595348 4782 flags.go:64] FLAG: --config-dir=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595352 4782 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595358 4782 flags.go:64] FLAG: --container-log-max-files="5"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595364 4782 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595369 4782 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595376 4782 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595381 4782 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595385 4782 flags.go:64] FLAG: --contention-profiling="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595389 4782 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595394 4782 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595399 4782 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595404 4782 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595409 4782 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595414 4782 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595418 4782 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595424 4782 flags.go:64] FLAG: --enable-load-reader="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595429 4782 flags.go:64] FLAG: --enable-server="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595434 4782 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595441 4782 flags.go:64] FLAG: --event-burst="100"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595445 4782 flags.go:64] FLAG: --event-qps="50"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595450 4782 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595455 4782 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595461 4782 flags.go:64] FLAG: --eviction-hard=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595472 4782 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595477 4782 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595482 4782 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595486 4782 flags.go:64] FLAG: --eviction-soft=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595491 4782 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595495 4782 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595500 4782 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595505 4782 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595509 4782 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595513 4782 flags.go:64] FLAG: --fail-swap-on="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595518 4782 flags.go:64] FLAG: --feature-gates=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595524 4782 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595528 4782 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595533 4782 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595538 4782 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595543 4782 flags.go:64] FLAG: --healthz-port="10248"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595548 4782 flags.go:64] FLAG: --help="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595552 4782 flags.go:64] FLAG: --hostname-override=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595557 4782 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595564 4782 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595569 4782 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595574 4782 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595579 4782 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595583 4782 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595588 4782 flags.go:64] FLAG: --image-service-endpoint=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595592 4782 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595596 4782 flags.go:64] FLAG: --kube-api-burst="100"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595601 4782 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595606 4782 flags.go:64] FLAG: --kube-api-qps="50"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595610 4782 flags.go:64] FLAG: --kube-reserved=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595614 4782 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595619 4782 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595623 4782 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595627 4782 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595632 4782 flags.go:64] FLAG: --lock-file=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595653 4782 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595658 4782 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595663 4782 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595670 4782 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595676 4782 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595682 4782 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595695 4782 flags.go:64] FLAG: --logging-format="text"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595702 4782 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595708 4782 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595714 4782 flags.go:64] FLAG: --manifest-url=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595719 4782 flags.go:64] FLAG: --manifest-url-header=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595727 4782 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595732 4782 flags.go:64] FLAG: --max-open-files="1000000"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595739 4782 flags.go:64] FLAG: --max-pods="110"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595745 4782 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595750 4782 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595756 4782 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595761 4782 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595767 4782 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595772 4782 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595777 4782 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595788 4782 flags.go:64] FLAG: --node-status-max-images="50"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595792 4782 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595797 4782 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595802 4782 flags.go:64] FLAG: --pod-cidr=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595806 4782 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595813 4782 flags.go:64] FLAG: --pod-manifest-path=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595818 4782 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595822 4782 flags.go:64] FLAG: --pods-per-core="0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595827 4782 flags.go:64] FLAG: --port="10250"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595831 4782 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595836 4782 flags.go:64] FLAG: --provider-id=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595840 4782 flags.go:64] FLAG: --qos-reserved=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595844 4782 flags.go:64] FLAG: --read-only-port="10255"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595849 4782 flags.go:64] FLAG: --register-node="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595853 4782 flags.go:64] FLAG: --register-schedulable="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595860 4782 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595877 4782 flags.go:64] FLAG: --registry-burst="10"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595884 4782 flags.go:64] FLAG: --registry-qps="5"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595891 4782 flags.go:64] FLAG: --reserved-cpus=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595896 4782 flags.go:64] FLAG: --reserved-memory=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595904 4782 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595908 4782 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595913 4782 flags.go:64] FLAG: --rotate-certificates="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595917 4782 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595923 4782 flags.go:64] FLAG: --runonce="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595927 4782 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595932 4782 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595937 4782 flags.go:64] FLAG: --seccomp-default="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595941 4782 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595946 4782 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595951 4782 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595956 4782 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595961 4782 flags.go:64] FLAG: --storage-driver-password="root"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595965 4782 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595969 4782 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595974 4782 flags.go:64] FLAG: --storage-driver-user="root"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595978 4782 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595983 4782 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595987 4782 flags.go:64] FLAG: --system-cgroups=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595991 4782 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595998 4782 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596003 4782 flags.go:64] FLAG: --tls-cert-file=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596007 4782 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596015 4782 flags.go:64] FLAG: --tls-min-version=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596019 4782 flags.go:64] FLAG: --tls-private-key-file=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596023 4782 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596027 4782 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596032 4782 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596037 4782 flags.go:64] FLAG: --v="2"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596044 4782 flags.go:64] FLAG: --version="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596050 4782 flags.go:64] FLAG: --vmodule=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596057 4782 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596061 4782 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
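The FLAG dump above records command-line values before they are merged with /etc/kubernetes/kubelet.conf, so some of them (for example --cgroup-driver="cgroupfs") do not match what the kubelet ends up using; further down the log the CRI runtime reports cgroupDriver="systemd". One way to see the merged, effective configuration on a live node is the kubelet's configz debug endpoint; a sketch, assuming kubectl access and that the Node object is named crc like the host:

    # dump the running kubelet's effective KubeletConfiguration as JSON
    kubectl get --raw "/api/v1/nodes/crc/proxy/configz" | python3 -m json.tool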
FLAG: --v="2" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596044 4782 flags.go:64] FLAG: --version="false" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596050 4782 flags.go:64] FLAG: --vmodule="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596057 4782 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596061 4782 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596162 4782 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596166 4782 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596171 4782 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596175 4782 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596179 4782 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596183 4782 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596186 4782 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596190 4782 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596193 4782 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596197 4782 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596201 4782 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596204 4782 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596208 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596212 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596216 4782 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596219 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596223 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596226 4782 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596230 4782 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596233 4782 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596237 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596240 4782 feature_gate.go:330] unrecognized feature gate: PinnedImages 
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596244 4782 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596247 4782 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596251 4782 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596255 4782 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596258 4782 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596262 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596267 4782 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596272 4782 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596276 4782 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596280 4782 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596285 4782 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596290 4782 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596295 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596299 4782 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596303 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596307 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596311 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596315 4782 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596319 4782 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596322 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596326 4782 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596329 4782 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596333 4782 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596337 4782 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596340 4782 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596343 4782 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596347 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596351 4782 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596355 4782 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596359 4782 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596363 4782 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596366 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596370 4782 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596374 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596378 4782 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596382 4782 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596386 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596389 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596402 4782 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596407 4782 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596411 4782 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596415 4782 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596420 4782 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596424 4782 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596428 4782 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596431 4782 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596435 4782 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596439 4782 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596444 4782 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596451 4782 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.608414 4782 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.608459 4782 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608528 4782 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608536 4782 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608542 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608547 4782 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608550 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608557 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608561 4782 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608566 4782 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608570 4782 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608574 4782 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608578 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608583 4782 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608589 4782 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608594 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608598 4782 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608602 4782 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608606 4782 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608612 4782 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608618 4782 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608624 4782 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608631 4782 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608651 4782 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608657 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608662 4782 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608666 4782 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608670 4782 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608674 4782 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608678 4782 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608682 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608687 4782 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608692 4782 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608697 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608701 4782 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608707 4782 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608713 4782 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608717 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608721 4782 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608724 4782 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608728 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608732 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608736 4782 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608740 4782 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608743 4782 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608749 4782 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608753 4782 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608757 4782 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608762 4782 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608766 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608770 4782 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608775 4782 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608779 4782 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608783 4782 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608788 4782 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608792 4782 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608795 4782 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608800 4782 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608803 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608807 4782 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608811 4782 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608815 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608819 4782 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608823 4782 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608827 4782 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608831 4782 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608835 4782 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608841 4782 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608845 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608849 4782 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608853 4782 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608857 4782 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608862 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.608869 4782 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
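The same "unrecognized feature gate" block repeats because the gate map is evidently parsed more than once during startup, each pass ending in an identical feature gates: {map[...]} summary; only gates known to the kubelet's own registry appear there, while the OpenShift-specific names are skipped with a warning. If a recognized gate had to be pinned explicitly, the featureGates field of the same KubeletConfiguration file would be the supported knob; a sketch only, mirroring a value already visible in the summary rather than recommending it:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    featureGates:
      KMSv1: true   # illustrative; shown as a deprecated gate being set in this log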
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608997 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609005 4782 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609012 4782 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609018 4782 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609025 4782 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609030 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609035 4782 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609039 4782 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609045 4782 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609050 4782 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609055 4782 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609059 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609063 4782 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609068 4782 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609073 4782 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609078 4782 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609082 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609087 4782 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609091 4782 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609095 4782 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609099 4782 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609103 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609107 4782 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609111 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609114 4782 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609119 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609124 4782 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609128 4782 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609131 4782 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609136 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609139 4782 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609143 4782 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609147 4782 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609151 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609156 4782 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609160 4782 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609165 4782 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609169 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609173 4782 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609177 4782 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609180 4782 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609184 4782 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609188 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609192 4782 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609195 4782 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609200 4782 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609203 4782 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609207 4782 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609211 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609215 4782 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609220 4782 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609225 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609230 4782 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609234 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609237 4782 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609241 4782 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609245 4782 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609250 4782 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609254 4782 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609258 4782 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609261 4782 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609265 4782 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609269 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609273 4782 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609276 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609280 4782 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609285 4782 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609290 4782 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609294 4782 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609299 4782 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609305 4782 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.609313 4782 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.609545 4782 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.614053 4782 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.614217 4782 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
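The rotation deadline logged below is derived from the client certificate just loaded from disk. To inspect that certificate by hand (assuming openssl is available on the node), something like:

    # print the subject and validity window of the kubelet's client cert
    openssl x509 -noout -subject -dates -in /var/lib/kubelet/pki/kubelet-client-current.pem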
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.616195 4782 server.go:997] "Starting client certificate rotation" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.616224 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.617397 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-19 16:00:12.029869556 +0000 UTC Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.617554 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.646959 4782 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.650168 4782 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.650917 4782 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.666571 4782 log.go:25] "Validated CRI v1 runtime API" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.703533 4782 log.go:25] "Validated CRI v1 image API" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.705629 4782 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.711496 4782 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-10-32-28-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.711545 4782 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.724131 4782 manager.go:217] Machine: {Timestamp:2026-02-02 10:38:40.721559158 +0000 UTC m=+0.605751884 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b85e9547-662e-4455-bbaa-2d2f2aaad904 BootID:9f06aea5-54f4-4b11-8fec-22fbe76ec89b Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 
Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:9d:ee:be Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9d:ee:be Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:bb:86:68 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d0:22:e2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f7:8c:d8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:bf:a9:f8 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:af:55:de Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d2:11:6b:87:92:1a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a6:9a:8d:f2:ed:24 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.724330 4782 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or 
libpfm support. Perf event counters are not available. Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.724555 4782 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.724903 4782 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.725121 4782 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.725167 4782 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.725361 4782 topology_manager.go:138] "Creating topology manager with none policy" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.725371 4782 container_manager_linux.go:303] "Creating device plugin manager" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.725897 4782 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.725927 4782 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.726201 4782 state_mem.go:36] "Initialized new in-memory state store" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.726291 4782 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.730299 4782 kubelet.go:418] "Attempting to sync node with API server" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.730332 4782 kubelet.go:313] 
"Adding static pod path" path="/etc/kubernetes/manifests" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.730360 4782 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.730374 4782 kubelet.go:324] "Adding apiserver pod source" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.730388 4782 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.735458 4782 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.736408 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.736413 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.736505 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.736524 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.737447 4782 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.742263 4782 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744145 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744190 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744207 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744221 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744244 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744259 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744276 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744299 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744318 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744332 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744354 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744368 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.746755 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.747952 4782 server.go:1280] "Started kubelet" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.748421 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.750259 4782 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.750252 4782 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 02 10:38:40 crc systemd[1]: Started Kubernetes Kubelet. 
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.751102 4782 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.753154 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.753398 4782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.759375 4782 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.759401 4782 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.759592 4782 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.759896 4782 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.760751 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:05:35.587244519 +0000 UTC Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.759165 4782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189067be566ae12c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:38:40.747897132 +0000 UTC m=+0.632089888,LastTimestamp:2026-02-02 10:38:40.747897132 +0000 UTC m=+0.632089888,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.760787 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.762583 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.762766 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.768190 4782 server.go:460] "Adding debug handlers to kubelet server" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.769539 4782 factory.go:55] Registering systemd factory Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.769590 4782 factory.go:221] Registration of the 
systemd container factory successfully Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.772084 4782 factory.go:153] Registering CRI-O factory Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.772216 4782 factory.go:221] Registration of the crio container factory successfully Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.772505 4782 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.772607 4782 factory.go:103] Registering Raw factory Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.772703 4782 manager.go:1196] Started watching for new ooms in manager Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773787 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773850 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773866 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773881 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773893 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773930 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773945 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773958 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773975 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773990 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774004 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774018 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774032 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774093 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774107 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774120 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774167 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774182 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774194 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774207 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774226 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774240 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774254 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774269 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774284 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774301 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774323 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774341 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774361 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774377 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774394 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774445 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774463 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774481 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774499 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774514 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774531 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774545 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774560 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774575 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774589 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774603 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774618 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774633 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774669 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774687 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774701 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774715 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774731 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774745 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774761 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774773 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774795 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774814 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774829 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774844 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774859 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774880 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774894 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774909 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774923 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774939 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774954 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774960 4782 manager.go:319] Starting recovery of all containers Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774969 4782 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.777496 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.778456 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.779508 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.779550 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.779572 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.782559 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.782648 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.782729 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.782830 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.782897 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.782968 4782 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783055 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783160 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783228 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783287 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783352 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783416 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783481 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783545 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783609 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783699 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783762 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783833 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783900 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784014 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784095 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784156 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784227 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784287 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784342 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784398 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784455 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784513 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784581 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784652 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784716 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784777 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784833 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784898 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784958 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785058 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785126 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785185 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785249 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785307 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785364 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785422 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785477 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785536 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785612 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785686 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785744 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785800 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785856 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785926 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785988 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786042 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786098 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786179 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786245 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786303 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786358 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786414 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786471 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786534 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786602 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786727 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786802 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786871 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786932 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791303 4782 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791391 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791425 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791451 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791474 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791498 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791525 4782 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791551 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791575 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791599 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791626 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791687 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791718 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791766 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791798 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791831 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791861 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791889 4782 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791915 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791945 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791974 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792004 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792033 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792064 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792094 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792125 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792382 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792434 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792458 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792480 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792502 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792525 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792548 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792569 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792590 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792612 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792632 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792691 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792744 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792770 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792798 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792827 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792864 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792890 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792920 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792950 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792979 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793011 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793041 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793072 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793102 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793131 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793162 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793191 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793217 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793246 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793273 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793302 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793332 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793365 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793396 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793429 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793458 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793489 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793523 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793554 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793585 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793631 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793695 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793729 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793758 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793788 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793822 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793856 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793899 4782 reconstruct.go:97] "Volume reconstruction finished" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793918 4782 reconciler.go:26] "Reconciler: start to sync state" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.799960 4782 manager.go:324] Recovery completed Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.810568 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.815224 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.815267 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.815288 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.817481 4782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.819500 4782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.819615 4782 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.819734 4782 kubelet.go:2335] "Starting kubelet main sync loop" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.819980 4782 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.822076 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.822324 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.824337 4782 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.824435 4782 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.824515 4782 state_mem.go:36] "Initialized new in-memory state store" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.844727 4782 policy_none.go:49] "None policy: Start" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.846176 4782 memory_manager.go:170] "Starting 
memorymanager" policy="None" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.846217 4782 state_mem.go:35] "Initializing new in-memory state store" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.860439 4782 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.894803 4782 manager.go:334] "Starting Device Plugin manager" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.894859 4782 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.894870 4782 server.go:79] "Starting device plugin registration server" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.895343 4782 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.895358 4782 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.895751 4782 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.895832 4782 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.895851 4782 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.905354 4782 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.920752 4782 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.920853 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.922900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.922935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.922947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.923050 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.923468 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.923553 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.923931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.923962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.923977 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924089 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924272 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924332 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924691 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924822 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924897 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925283 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925322 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925562 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925589 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925726 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925736 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925745 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925841 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925926 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925953 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926245 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926272 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926429 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926443 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926582 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926611 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.927403 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.927424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.927432 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.961487 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997359 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997417 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997468 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997495 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997516 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997538 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997559 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997579 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997601 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997688 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997759 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997814 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997855 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997901 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997938 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.998005 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.000578 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.000619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.000632 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.000683 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:38:41 crc kubenswrapper[4782]: E0202 10:38:41.001123 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.098894 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.098945 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.098966 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.098981 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099001 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099018 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099033 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099049 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099064 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099078 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099093 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099109 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099118 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099163 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099201 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099135 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099149 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099232 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099241 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099279 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099189 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099196 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099210 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099345 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099347 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099361 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099355 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099516 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099130 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.202032 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.203668 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.203725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.203736 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.203760 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:38:41 crc kubenswrapper[4782]: E0202 10:38:41.204069 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.254002 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.260545 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.281684 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.307984 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.313519 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.314672 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d2dfd48b7a8991519e79cf891212615207531c563922d7ea8469e2a2a9528eb2 WatchSource:0}: Error finding container d2dfd48b7a8991519e79cf891212615207531c563922d7ea8469e2a2a9528eb2: Status 404 returned error can't find the container with id d2dfd48b7a8991519e79cf891212615207531c563922d7ea8469e2a2a9528eb2 Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.315130 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d719f76f70a8a4073068478013e93bb109862e474c549a8b98071fa332d723de WatchSource:0}: Error finding container d719f76f70a8a4073068478013e93bb109862e474c549a8b98071fa332d723de: Status 404 returned error can't find the container with id d719f76f70a8a4073068478013e93bb109862e474c549a8b98071fa332d723de Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.320001 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a64c6ae9b4c805bb828af3c4df452a41e97a76c50cef1fa957bda220f2c22661 WatchSource:0}: Error finding container a64c6ae9b4c805bb828af3c4df452a41e97a76c50cef1fa957bda220f2c22661: Status 404 returned error can't find the container with id a64c6ae9b4c805bb828af3c4df452a41e97a76c50cef1fa957bda220f2c22661 Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.323903 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-903fb7d89ee39c80c4e9fdbefe41d47c94527055ae8d99cf00d11c7467c54b8f WatchSource:0}: Error finding container 903fb7d89ee39c80c4e9fdbefe41d47c94527055ae8d99cf00d11c7467c54b8f: Status 404 returned error can't find the container with id 903fb7d89ee39c80c4e9fdbefe41d47c94527055ae8d99cf00d11c7467c54b8f Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.330374 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-13fad8c88b566c8e02ec0d97c7debb99483ec5a33c6a0841c33e1c17304a68e5 WatchSource:0}: Error finding container 13fad8c88b566c8e02ec0d97c7debb99483ec5a33c6a0841c33e1c17304a68e5: Status 404 returned error can't find the container with id 13fad8c88b566c8e02ec0d97c7debb99483ec5a33c6a0841c33e1c17304a68e5 Feb 02 10:38:41 crc kubenswrapper[4782]: E0202 10:38:41.362955 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.588324 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:41 crc kubenswrapper[4782]: E0202 10:38:41.588420 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.604705 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.606156 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.606191 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.606200 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.606221 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:38:41 crc kubenswrapper[4782]: E0202 10:38:41.606518 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.750623 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.761681 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:29:57.806259927 +0000 UTC Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.824581 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a64c6ae9b4c805bb828af3c4df452a41e97a76c50cef1fa957bda220f2c22661"} Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.825382 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d719f76f70a8a4073068478013e93bb109862e474c549a8b98071fa332d723de"} Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.826081 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d2dfd48b7a8991519e79cf891212615207531c563922d7ea8469e2a2a9528eb2"} Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.826754 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"13fad8c88b566c8e02ec0d97c7debb99483ec5a33c6a0841c33e1c17304a68e5"} Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.827361 4782 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"903fb7d89ee39c80c4e9fdbefe41d47c94527055ae8d99cf00d11c7467c54b8f"} Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.853574 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:41 crc kubenswrapper[4782]: E0202 10:38:41.853873 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:42 crc kubenswrapper[4782]: W0202 10:38:42.153432 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:42 crc kubenswrapper[4782]: E0202 10:38:42.153563 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:42 crc kubenswrapper[4782]: E0202 10:38:42.163581 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Feb 02 10:38:42 crc kubenswrapper[4782]: W0202 10:38:42.210414 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:42 crc kubenswrapper[4782]: E0202 10:38:42.210523 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.406954 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.408194 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.408256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.408266 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.408285 4782 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:38:42 crc kubenswrapper[4782]: E0202 10:38:42.408821 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.717971 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 10:38:42 crc kubenswrapper[4782]: E0202 10:38:42.718959 4782 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.749879 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.761980 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:48:40.389130723 +0000 UTC Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.833051 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.833099 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.833110 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.833131 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.833151 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.834024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.834062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.834075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:42 crc 
kubenswrapper[4782]: I0202 10:38:42.835459 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e" exitCode=0 Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.835536 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.835578 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.838692 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.838731 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.838741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.839719 4782 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444" exitCode=0 Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.839793 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.839845 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840082 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840787 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840820 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.841416 4782 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624" exitCode=0 Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.841541 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 
02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.841556 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.842673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.842695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.842704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.844605 4782 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501" exitCode=0 Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.844652 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.844743 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.845410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.845433 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.845441 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:43 crc kubenswrapper[4782]: W0202 10:38:43.228453 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:43 crc kubenswrapper[4782]: E0202 10:38:43.228518 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.375321 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.400602 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.409297 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:43 crc kubenswrapper[4782]: W0202 10:38:43.734099 4782 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:43 crc kubenswrapper[4782]: E0202 10:38:43.735167 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.748980 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.762318 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:22:58.686815138 +0000 UTC Feb 02 10:38:43 crc kubenswrapper[4782]: E0202 10:38:43.764963 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.850326 4782 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c" exitCode=0 Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.850436 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.850445 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.851918 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.851961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.851977 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.854191 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.854193 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.855265 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.855310 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.855322 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.862955 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.862996 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.863008 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.863016 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.863880 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.863916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.863929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.865925 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.865956 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.865970 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.865981 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.866029 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.866862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.866898 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.866911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.009549 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.010924 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.010956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.010965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.010987 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:38:44 crc kubenswrapper[4782]: E0202 10:38:44.011431 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 02 10:38:44 crc kubenswrapper[4782]: W0202 10:38:44.092238 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:44 crc kubenswrapper[4782]: E0202 10:38:44.092318 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.748948 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.762721 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:06:12.892004579 +0000 UTC Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.766120 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.871564 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1c05a84703a5d8931496441fab6db1ce52e01b71ffd0ee27cc5fc62a163a428a"} Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.871707 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.872495 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.872518 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.872526 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.874329 4782 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920" exitCode=0 Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.874419 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.874827 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.875088 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920"} Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.875140 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.875447 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.875755 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.879339 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.879366 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.879376 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.879874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.879893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.879900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.880308 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.880328 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.880335 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.880803 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.880819 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.880827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: W0202 10:38:44.945078 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:44 crc kubenswrapper[4782]: E0202 10:38:44.945144 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.762897 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:48:06.159362388 +0000 UTC Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.878496 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.882005 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1c05a84703a5d8931496441fab6db1ce52e01b71ffd0ee27cc5fc62a163a428a" exitCode=255 Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.882093 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1c05a84703a5d8931496441fab6db1ce52e01b71ffd0ee27cc5fc62a163a428a"} Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.882263 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.883106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.883138 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.883151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.883943 4782 scope.go:117] "RemoveContainer" containerID="1c05a84703a5d8931496441fab6db1ce52e01b71ffd0ee27cc5fc62a163a428a" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.893165 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f"} Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.893238 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.893349 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.893242 
4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744"} Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.893772 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860"} Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.893803 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584"} Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.894424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.894458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.894470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.895210 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.895247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.895262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.016528 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.763605 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:51:57.560792971 +0000 UTC Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.811832 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.898147 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.899924 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057"} Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.900018 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.900061 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.901412 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:46 crc 
kubenswrapper[4782]: I0202 10:38:46.901442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.901452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.904167 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54"} Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.904312 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.905231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.905348 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.905428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.211623 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.213426 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.213468 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.213479 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.213507 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.763790 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:04:58.144457459 +0000 UTC Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.767070 4782 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.767177 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.907293 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.907354 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 
10:38:47.907972 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.908766 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.908856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.908877 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.908766 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.908913 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.908933 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.926903 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.764852 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:43:09.639102995 +0000 UTC Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.909848 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.909971 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.911688 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.911821 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.911911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.912159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.912268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.912362 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.190880 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.579342 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.579568 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.580748 
4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.580810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.580824 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.766454 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:25:44.861442269 +0000 UTC
Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.912186 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.913216 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.913247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.913256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:38:50 crc kubenswrapper[4782]: I0202 10:38:50.767041 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:21:21.561627553 +0000 UTC
Feb 02 10:38:50 crc kubenswrapper[4782]: E0202 10:38:50.905512 4782 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 10:38:51 crc kubenswrapper[4782]: I0202 10:38:51.767835 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:22:16.645639333 +0000 UTC
Feb 02 10:38:52 crc kubenswrapper[4782]: I0202 10:38:52.768122 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:17:40.066838006 +0000 UTC
Feb 02 10:38:53 crc kubenswrapper[4782]: I0202 10:38:53.405481 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 10:38:53 crc kubenswrapper[4782]: I0202 10:38:53.405678 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:38:53 crc kubenswrapper[4782]: I0202 10:38:53.406982 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:38:53 crc kubenswrapper[4782]: I0202 10:38:53.407021 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:38:53 crc kubenswrapper[4782]: I0202 10:38:53.407037 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:38:53 crc kubenswrapper[4782]: I0202 10:38:53.768940 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:12:15.61346427 +0000 UTC
Feb 02 10:38:54 crc kubenswrapper[4782]: I0202 10:38:54.195995 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 02 10:38:54 crc kubenswrapper[4782]: I0202 10:38:54.196299 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:38:54 crc kubenswrapper[4782]: I0202 10:38:54.197903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:38:54 crc kubenswrapper[4782]: I0202 10:38:54.197969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:38:54 crc kubenswrapper[4782]: I0202 10:38:54.197986 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:38:54 crc kubenswrapper[4782]: I0202 10:38:54.770006 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:27:22.861592852 +0000 UTC
Feb 02 10:38:55 crc kubenswrapper[4782]: I0202 10:38:55.239764 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 10:38:55 crc kubenswrapper[4782]: I0202 10:38:55.239847 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 10:38:55 crc kubenswrapper[4782]: I0202 10:38:55.252625 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 10:38:55 crc kubenswrapper[4782]: I0202 10:38:55.252736 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 10:38:55 crc kubenswrapper[4782]: I0202 10:38:55.770418 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:13:45.874839686 +0000 UTC
Feb 02 10:38:56 crc kubenswrapper[4782]: I0202 10:38:56.770610 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:03:55.310696848 +0000 UTC
Feb 02 10:38:57 crc kubenswrapper[4782]: I0202 10:38:57.767047 4782 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 10:38:57 crc kubenswrapper[4782]: I0202 10:38:57.767520 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 10:38:57 crc kubenswrapper[4782]: I0202 10:38:57.771527 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:46:22.026396344 +0000 UTC
Feb 02 10:38:58 crc kubenswrapper[4782]: I0202 10:38:58.772251 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:09:00.918501712 +0000 UTC
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.196928 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.197223 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.197958 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.198041 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.198466 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.198510 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.198521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.202346 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.247105 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.247167 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.772711 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:25:32.747483782 +0000 UTC
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.937447 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.937832 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.937892 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.938610 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.938703 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.938725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.251696 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.257046 4782 trace.go:236] Trace[166587467]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 10:38:49.955) (total time: 10301ms):
Feb 02 10:39:00 crc kubenswrapper[4782]: Trace[166587467]: ---"Objects listed" error: 10301ms (10:39:00.256)
Feb 02 10:39:00 crc kubenswrapper[4782]: Trace[166587467]: [10.301939594s] [10.301939594s] END
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.257080 4782 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.259427 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.259464 4782 trace.go:236] Trace[2122569030]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 10:38:47.255) (total time: 13003ms):
Feb 02 10:39:00 crc kubenswrapper[4782]: Trace[2122569030]: ---"Objects listed" error: 13003ms (10:39:00.259)
Feb 02 10:39:00 crc kubenswrapper[4782]: Trace[2122569030]: [13.003595325s] [13.003595325s] END
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.259493 4782 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.259498 4782 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.259579 4782 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.261450 4782 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.265791 4782 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.742524 4782 apiserver.go:52] "Watching apiserver"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.772569 4782 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.772847 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:45:51.480582118 +0000 UTC
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.772987 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.773404 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.773437 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.773498 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.773807 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.773862 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.773957 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.774024 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.773960 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.774162 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.778851 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.778909 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.778991 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.779225 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.779610 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.779969 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.780911 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.782770 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.785992 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.806907 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.834796 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.846817 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.861102 4782 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863379 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863459 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863483 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863504 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863580 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863654 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863682 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863721 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863758 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863782 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863803 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863827 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863847 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863870 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863918 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863939 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863962 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863993 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864013 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864033 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864057 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864079 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864130 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864171 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864193 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864217 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864238 4782 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864258 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864280 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864326 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864379 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864400 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864435 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864465 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864497 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864529 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864549 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864572 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864592 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864621 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864658 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864683 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864716 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864751 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864775 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864800 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864822 4782 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864844 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864863 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864885 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864907 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864930 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864952 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864979 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865001 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865047 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 
10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865071 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865091 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865110 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865132 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865155 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865178 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865220 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865240 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865260 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865283 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865305 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865328 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865355 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865377 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865397 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865418 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865445 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865477 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865510 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865536 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865564 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865590 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865613 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865660 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865686 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865713 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865738 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865768 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865803 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865832 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865856 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865885 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865910 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865931 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865959 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865984 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866004 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866027 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866049 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866073 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866097 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866119 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866141 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866165 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866192 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866216 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866242 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866264 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866289 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866311 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866333 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866355 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866383 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866417 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866443 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866471 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866498 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866524 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866548 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866633 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866676 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866701 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866722 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866746 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866770 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866795 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866816 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866839 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866863 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866885 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866908 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866931 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866953 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866978 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867001 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867028 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867051 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867077 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867103 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: 
I0202 10:39:00.867129 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867163 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867188 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867213 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867237 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867261 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867285 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867309 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867337 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867362 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 10:39:00 crc 
kubenswrapper[4782]: I0202 10:39:00.867388 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867415 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867439 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867463 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867492 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867534 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867559 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867581 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867607 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867632 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867679 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867704 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867728 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867753 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867780 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867803 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867824 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867846 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867863 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867880 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867899 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867915 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867932 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867952 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867972 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867989 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868006 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868025 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868043 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868060 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868079 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868118 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868139 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868160 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868179 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868200 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868219 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868237 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868256 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868274 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868297 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868323 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868352 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868377 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868402 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868523 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868542 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868559 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868577 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868595 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868668 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868696 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868720 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868741 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868762 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868791 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868815 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868836 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868858 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868875 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868901 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868920 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868942 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868992 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869786 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863771 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863784 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864022 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864059 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864248 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864548 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864775 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865020 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865125 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865314 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865497 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865593 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865731 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865850 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865872 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866199 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866257 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866377 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.870036 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866391 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866409 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866575 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866797 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866993 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867297 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868300 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868941 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869041 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869260 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869439 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869454 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869625 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869850 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869847 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.870587 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.870785 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.870933 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.871097 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.874556 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.874608 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.874953 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.874967 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.875042 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.875211 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.875267 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.875296 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.875725 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.875951 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.876015 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.876055 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.876342 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.876678 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.876710 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.876747 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.877566 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878174 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878402 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878509 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878569 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878592 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878585 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878853 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878891 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878976 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.879630 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.879659 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.879771 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.880244 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.880326 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.880379 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.880389 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.880692 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.881109 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.881211 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.881324 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.881674 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.881686 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.882075 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.882077 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.882129 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.882554 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.882843 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.882963 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.883216 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.883245 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.883282 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.883507 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.883609 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.883827 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884073 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884288 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884446 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884496 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884504 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884727 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884740 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.885037 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.887172 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). 
InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.887283 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.887610 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.887696 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.887834 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.887899 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888202 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888322 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888537 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888715 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888717 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888809 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888948 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889085 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889228 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889436 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889627 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889727 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889759 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889756 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889787 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889888 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889982 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890089 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890115 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890299 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890381 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890562 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890592 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890623 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890938 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891132 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891260 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891377 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891426 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891432 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891522 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891569 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891795 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891815 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891831 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891862 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892107 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892168 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892003 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892285 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892364 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892594 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892791 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.892962 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:39:01.392934729 +0000 UTC m=+21.277127615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893074 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893104 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893388 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893495 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893504 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893855 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893924 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894050 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894160 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894273 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894442 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894612 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894622 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894796 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895043 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895071 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895131 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895260 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895333 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895591 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895721 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895797 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895863 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895962 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.896252 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.896681 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.896914 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.896975 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897109 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897229 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897371 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897494 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897574 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897724 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897773 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897933 4782 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.947454 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.947896 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.950128 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.951474 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897954 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod 
"5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.898135 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.898217 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.898419 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.898557 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.898920 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.946009 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.946136 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.946300 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.946321 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.946445 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.946598 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.947086 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.955527 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.955592 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.955617 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.955721 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:01.455700362 +0000 UTC m=+21.339893078 (durationBeforeRetry 500ms). 
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.956204 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.961698 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:01.461662458 +0000 UTC m=+21.345855244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.968552 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.968619 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.968662 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.970835 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.971805 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.973442 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:01.46890273 +0000 UTC m=+21.353095446 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.973494 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:01.473474018 +0000 UTC m=+21.357666734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973611 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973660 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973747 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973773 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973788 4782 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973800 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973814 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973826 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973840 4782 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973853 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973865 4782 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973882 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973903 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973925 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973940 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973952 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973963 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973974 4782 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973985 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973997 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974009 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974019 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974030 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974043 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on 
node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974053 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974065 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974077 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974088 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974100 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974112 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974124 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974136 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974147 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974160 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974171 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974182 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974195 4782 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974206 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974218 4782 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974229 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974240 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974250 4782 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974261 4782 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974274 4782 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974286 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974299 4782 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974312 4782 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974322 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974332 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974344 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974356 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974367 4782 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974377 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974389 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974400 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974410 4782 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974429 4782 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974442 4782 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974455 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974466 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974477 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974489 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974500 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974512 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974523 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974534 4782 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974544 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974556 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974568 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974580 4782 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974594 4782 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974606 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974616 4782 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974627 4782 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974726 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.977001 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.977554 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.975113 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981735 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981801 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981818 4782 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981834 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981847 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981882 4782 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981896 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981909 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981922 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981933 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981971 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981983 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981996 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982010 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982044 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982057 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982069 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982080 4782 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982094 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982127 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982138 4782 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982150 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982162 4782 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982174 4782 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982205 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982217 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982228 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982240 4782 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982251 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982284 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982296 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982307 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982322 4782 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982355 4782 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982394 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982434 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982448 4782 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982460 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982474 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982514 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982528 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982542 4782 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982554 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982569 4782 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982600 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982613 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982625 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982662 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982677 4782 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982690 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982701 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982710 4782 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982737 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982748 4782 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982757 4782 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982768 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982779 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982835 4782 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982851 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982865 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982899 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982916 4782 reconciler_common.go:293] "Volume detached for 
volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982928 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982941 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982954 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982990 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983002 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983015 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983027 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983059 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983072 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983085 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983097 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983110 4782 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: 
I0202 10:39:00.983144 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983159 4782 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983173 4782 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983185 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983219 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983234 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983247 4782 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983260 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983271 4782 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983305 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983319 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983333 4782 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983346 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 
10:39:00.983359 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983372 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983409 4782 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983422 4782 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983436 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983448 4782 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983460 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983472 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983484 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983497 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983510 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983544 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983560 4782 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983577 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983589 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983607 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983619 4782 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983631 4782 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983668 4782 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983681 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983693 4782 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983706 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983717 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983729 4782 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983740 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983777 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983788 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983801 4782 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983813 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983824 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983837 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.991840 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.994992 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.996621 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.004301 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057.scope\": RecentStats: unable to find data in memory cache]" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.006347 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.011086 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.019433 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.033857 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.046717 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.061806 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.073979 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.085102 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.085207 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.085228 4782 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.091361 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.102773 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.105035 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.114760 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.125780 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.488957 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.489046 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489075 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:39:02.489054024 +0000 UTC m=+22.373246760 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.489107 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489122 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.489140 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489167 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:02.489155957 +0000 UTC m=+22.373348673 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.489187 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489228 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489261 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:02.48925163 +0000 UTC m=+22.373444346 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489272 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489287 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489301 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489317 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489330 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:02.489321232 +0000 UTC m=+22.373513948 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489330 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489345 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489384 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:02.489374493 +0000 UTC m=+22.373567209 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.773005 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 02:44:53.047483671 +0000 UTC Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.820867 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.821016 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.978866 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.979852 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.982714 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057" exitCode=255 Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.982770 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.982845 4782 scope.go:117] "RemoveContainer" containerID="1c05a84703a5d8931496441fab6db1ce52e01b71ffd0ee27cc5fc62a163a428a" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.984994 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"35e06303ce818ef4b29ebc14c989d7b71257426e71e5aca847c457d6b2a42a2e"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.987751 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.987790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.987806 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9094eee33ca27e352f9ec60b2b6f21cc1c5ae940ff9c1db454e673c91ea4dea9"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.990549 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.990633 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0017c1161f0c86e08b4713c5155093f0eebe7d88e487f034f8ea0739fac9c056"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.997295 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.998401 4782 scope.go:117] "RemoveContainer" containerID="b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.998603 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.000251 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.017287 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.033514 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.050904 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.060977 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.072459 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.083824 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.103124 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.118091 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.131592 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.147780 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c05a84703a5d8931496441fab6db1ce52e01b71ffd0ee27cc5fc62a163a428a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"message\\\":\\\"W0202 10:38:44.297012 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 10:38:44.297318 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770028724 cert, and key in /tmp/serving-cert-2718931427/serving-signer.crt, /tmp/serving-cert-2718931427/serving-signer.key\\\\nI0202 10:38:44.688709 1 observer_polling.go:159] Starting file observer\\\\nW0202 10:38:44.691818 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 10:38:44.691936 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:38:44.692850 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2718931427/tls.crt::/tmp/serving-cert-2718931427/tls.key\\\\\\\"\\\\nF0202 10:38:44.956753 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for 
mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.164431 4782 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.178789 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.500913 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.501003 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.501030 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.501051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.501071 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501127 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501178 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:04.501164225 +0000 UTC m=+24.385356941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501207 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501216 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501220 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501247 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501261 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:04.501252037 +0000 UTC m=+24.385444753 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501285 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:04.501277428 +0000 UTC m=+24.385470134 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501296 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501341 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501358 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501425 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:04.501398831 +0000 UTC m=+24.385591597 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501532 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:39:04.501521225 +0000 UTC m=+24.385713941 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.773701 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:13:27.808265878 +0000 UTC Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.820949 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.821088 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.820949 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.821467 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.826411 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.827058 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.828837 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.830030 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.831314 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.832840 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.835366 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.836237 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.837909 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 
10:39:02.838841 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.840239 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.841240 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.841885 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.842569 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.843287 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.843944 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.844608 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.845090 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.845808 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.846428 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.847209 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.847882 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.848388 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.849133 4782 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.850809 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.851729 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.853123 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.853630 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.854304 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.855273 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.855791 4782 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.855897 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.858716 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.859660 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.860180 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.862938 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.863712 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.864333 4782 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.865497 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.866808 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.867854 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.868892 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.869677 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.870403 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.870951 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.871604 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.872200 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.873163 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.873766 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.874393 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.875049 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.875754 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.876501 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.877116 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.994914 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.996969 4782 scope.go:117] "RemoveContainer" containerID="b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057" Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.997260 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.010460 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.025498 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.043007 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.056366 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.070206 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.083448 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.106499 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.774173 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:53:49.262639123 +0000 UTC Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.820151 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:03 crc kubenswrapper[4782]: E0202 10:39:03.820280 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.283413 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.296108 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.298840 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.309137 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.328083 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.345321 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.364333 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.381945 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.398218 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.414898 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.430597 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.444634 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.461060 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.479694 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.493384 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.508625 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.528621 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.528723 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.528754 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.528784 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.528807 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.528974 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.528998 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529011 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529007 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529065 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:08.529043947 +0000 UTC m=+28.413236663 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529109 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:08.529081828 +0000 UTC m=+28.413274594 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.528974 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529147 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529166 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529234 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:08.529224602 +0000 UTC m=+28.413417408 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529330 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529361 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:08.529352186 +0000 UTC m=+28.413545012 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529488 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:39:08.529476859 +0000 UTC m=+28.413669645 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.536711 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.555986 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.771749 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.774309 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:15:07.568641017 +0000 UTC Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.776851 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.787445 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.789580 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.815080 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.820205 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.820332 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.820409 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.820523 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.830453 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.845765 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.860563 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.876731 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.890754 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.907939 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.923428 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.939593 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.955037 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.969103 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.991159 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.002397 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a"} Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.009192 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: E0202 10:39:05.011111 4782 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.026806 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.041981 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.058430 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.083429 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 
1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.103264 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.118185 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.131779 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.156960 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.170284 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.183440 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.217594 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.238183 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.774806 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:03:35.906462922 +0000 UTC Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.820438 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:05 crc kubenswrapper[4782]: E0202 10:39:05.820628 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.660467 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.662570 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.662780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.662885 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.663029 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.670852 4782 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.671356 4782 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.672608 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.672685 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.672703 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.672725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.672741 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.691345 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.695350 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.695501 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.695596 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.695701 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.695786 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.711779 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.715975 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.716022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.716042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.716060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.716071 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.731227 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.735765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.736045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.736151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.736236 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.736310 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.752965 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.757841 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.758073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.758175 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.758259 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.758344 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.775117 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:14:33.405042196 +0000 UTC Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.775758 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.775992 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.777357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.777387 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.777398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.777414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.777425 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.820824 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.820880 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.820962 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.821326 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.879526 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.879566 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.879575 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.879588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.879598 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.982411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.982448 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.982460 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.982477 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.982491 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.085890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.085951 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.085963 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.085984 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.085999 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.188758 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.188826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.188844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.188873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.188890 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.291398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.291436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.291446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.291459 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.291471 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.394003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.394051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.394061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.394075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.394084 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.496580 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.496633 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.496676 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.496693 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.496708 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.598833 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.598929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.598942 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.598965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.598979 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.701790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.701839 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.701851 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.701874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.701886 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.775285 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:24:37.57703347 +0000 UTC Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.804144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.804183 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.804194 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.804210 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.804221 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.820428 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:07 crc kubenswrapper[4782]: E0202 10:39:07.820550 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
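Every "Failed to update status for pod" error in the entries that follow cites the same root cause: the network-node-identity webhook at https://127.0.0.1:9743 presents a serving certificate whose NotAfter is 2025-08-24T17:21:41Z while the node clock reads 2026-02-02, so either the certificate has lapsed or the node clock has jumped past it; either way the TLS handshake fails. A minimal Go sketch (not kubelet code; the endpoint is taken from the failing Post calls below, and InsecureSkipVerify is used deliberately so the handshake survives long enough to read the peer certificate) to confirm the validity window from the node itself:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	// Webhook endpoint from the failed "Post https://127.0.0.1:9743/pod" calls below.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	// Compare the peer certificate's validity window against the node clock.
    	cert := conn.ConnectionState().PeerCertificates[0]
    	now := time.Now().UTC()
    	fmt.Printf("cert valid %s to %s; now %s; expired=%v\n",
    		cert.NotBefore.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339),
    		now.Format(time.RFC3339), now.After(cert.NotAfter))
    }

Note this probes the webhook's own serving certificate, which is distinct from the kubelet-serving certificate whose rotation deadline is reported by certificate_manager.go above.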
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.906651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.906691 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.906702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.906716 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.906727 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.926558 4782 csr.go:261] certificate signing request csr-qkjks is approved, waiting to be issued Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.935370 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fptzv"] Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.935751 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.938960 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.940268 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.953991 4782 csr.go:257] certificate signing request csr-qkjks is issued Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.954770 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:07Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.954886 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.969695 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:07Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.002608 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:07Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.009678 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.009707 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.009715 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.009728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.009736 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.044602 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.063740 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.089369 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6np9\" (UniqueName: \"kubernetes.io/projected/fa0a3c57-fe47-43dd-8905-00df4cae4fb8-kube-api-access-w6np9\") pod \"node-resolver-fptzv\" (UID: \"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\") " pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.089419 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fa0a3c57-fe47-43dd-8905-00df4cae4fb8-hosts-file\") pod \"node-resolver-fptzv\" (UID: \"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\") " pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.089728 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.108796 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.112325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.112358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.112375 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.112395 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.112407 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.141724 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.170384 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.187867 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.190308 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6np9\" (UniqueName: \"kubernetes.io/projected/fa0a3c57-fe47-43dd-8905-00df4cae4fb8-kube-api-access-w6np9\") pod \"node-resolver-fptzv\" (UID: \"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\") " pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.190337 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fa0a3c57-fe47-43dd-8905-00df4cae4fb8-hosts-file\") pod \"node-resolver-fptzv\" (UID: \"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\") " pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.190424 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fa0a3c57-fe47-43dd-8905-00df4cae4fb8-hosts-file\") pod \"node-resolver-fptzv\" (UID: \"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\") " pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.212673 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6np9\" (UniqueName: \"kubernetes.io/projected/fa0a3c57-fe47-43dd-8905-00df4cae4fb8-kube-api-access-w6np9\") pod \"node-resolver-fptzv\" (UID: \"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\") " pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.217014 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.217049 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.217059 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.217073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc 
kubenswrapper[4782]: I0202 10:39:08.217084 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.248045 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.325839 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.325875 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.325884 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.325898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.325908 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.428137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.428172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.428183 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.428197 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.428208 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.448037 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fsqgq"] Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.448274 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-bhdgk"] Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.448406 4782 util.go:30] "No sandbox for pod can be found. 
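Each "Node became not ready" entry above carries the node's Ready condition as plain JSON after condition=. A small Go sketch (the struct is a hand-rolled stand-in for the upstream v1.NodeCondition type, with field names taken from the payload logged above) that extracts the reason and message from one of these payloads:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // condition mirrors the object logged after "condition=" in the
    // setters.go:603 entries above.
    type condition struct {
    	Type               string `json:"type"`
    	Status             string `json:"status"`
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Reason             string `json:"reason"`
    	Message            string `json:"message"`
    }

    func main() {
    	// Payload copied verbatim from the 10:39:08.325908 entry above.
    	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
    	var c condition
    	if err := json.Unmarshal([]byte(raw), &c); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }

The node will keep reporting this condition until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, which the multus pod added in the SyncLoop entries above is presumably what will eventually provide.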
Need to start a new one" pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.448502 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.450004 4782 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.450039 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.450554 4782 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.450580 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.450934 4782 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.450962 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.451258 4782 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.451278 4782 reflector.go:158] "Unhandled Error" 
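The forbidden list/watch errors above are node-authorizer denials: a kubelet may only read secrets and configmaps referenced by pods already bound to its node, and at this instant the authorizer's graph has no link between node 'crc' and those objects, hence "no relationship found". A client-go sketch (the kubeconfig path is an assumption about this node's layout; the API calls are standard client-go, not kubelet code) that asks the API server the same authorization question directly, as the node identity:

    package main

    import (
    	"context"
    	"fmt"

    	authv1 "k8s.io/api/authorization/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed location of the kubelet's kubeconfig on this node.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Same verb/resource/namespace as the failing reflector above.
    	sar := &authv1.SelfSubjectAccessReview{
    		Spec: authv1.SelfSubjectAccessReviewSpec{
    			ResourceAttributes: &authv1.ResourceAttributes{
    				Namespace: "openshift-multus",
    				Verb:      "list",
    				Resource:  "configmaps",
    			},
    		},
    	}
    	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("allowed=%v denied=%v reason=%q\n", resp.Status.Allowed, resp.Status.Denied, resp.Status.Reason)
    }

These denials are normally transient at startup: once the multus and machine-config-daemon pods are bound to the node, the authorizer's graph links the referenced secrets and configmaps and the reflectors recover on their next relist.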
err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.451358 4782 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.451374 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.451606 4782 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.451626 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.451929 4782 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.451954 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.452046 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.452951 4782 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 
10:39:08.452979 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.453723 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.466860 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.481913 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.499490 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.519360 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.530562 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.530819 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.530927 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.531007 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.531073 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.545274 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: 
I0202 10:39:08.571017 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.589205 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593280 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593334 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-cni-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593353 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-netns\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593368 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-cni-binary-copy\") pod 
\"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593389 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593411 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-k8s-cni-cncf-io\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593432 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-os-release\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593450 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nrfs\" (UniqueName: \"kubernetes.io/projected/04d9744a-e730-45b4-9f0c-bbb5b02cd311-kube-api-access-7nrfs\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593466 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7919e98f-cc47-4f3c-9c53-6313850ea7b8-rootfs\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593483 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7919e98f-cc47-4f3c-9c53-6313850ea7b8-proxy-tls\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593498 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-system-cni-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593515 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-cnibin\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593532 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593546 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-kubelet\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593559 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-hostroot\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593574 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-conf-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593588 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-etc-kubernetes\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593605 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593622 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7919e98f-cc47-4f3c-9c53-6313850ea7b8-mcd-auth-proxy-config\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593652 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfn4s\" (UniqueName: \"kubernetes.io/projected/7919e98f-cc47-4f3c-9c53-6313850ea7b8-kube-api-access-sfn4s\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593666 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-cni-bin\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593680 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-daemon-config\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593697 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593715 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-socket-dir-parent\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593730 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-cni-multus\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593743 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-multus-certs\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.593817 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:39:16.593804473 +0000 UTC m=+36.477997189 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.593931 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.593943 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.593953 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.593979 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:16.593973308 +0000 UTC m=+36.478166024 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594214 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594245 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:16.594237865 +0000 UTC m=+36.478430581 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594307 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594322 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594330 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594351 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:16.594345508 +0000 UTC m=+36.478538224 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594411 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594438 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:16.594431081 +0000 UTC m=+36.478623797 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.619282 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.633110 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.633140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.633149 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.633160 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.633169 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.633722 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.651280 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.674220 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694427 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-cni-multus\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694692 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-multus-certs\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694604 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-cni-multus\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694774 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-socket-dir-parent\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694858 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-cni-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694864 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-multus-certs\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694888 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-netns\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694939 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-netns\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694956 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-cni-binary-copy\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694975 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-cni-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694992 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-k8s-cni-cncf-io\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695023 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-k8s-cni-cncf-io\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695028 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7919e98f-cc47-4f3c-9c53-6313850ea7b8-rootfs\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695053 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7919e98f-cc47-4f3c-9c53-6313850ea7b8-rootfs\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695059 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-os-release\") pod \"multus-fsqgq\" (UID: 
\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695086 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nrfs\" (UniqueName: \"kubernetes.io/projected/04d9744a-e730-45b4-9f0c-bbb5b02cd311-kube-api-access-7nrfs\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695106 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-os-release\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695116 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-system-cni-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695142 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-cnibin\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695165 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7919e98f-cc47-4f3c-9c53-6313850ea7b8-proxy-tls\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695184 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-hostroot\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695201 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-conf-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695208 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-cnibin\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695219 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-etc-kubernetes\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695232 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-system-cni-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695241 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-hostroot\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695254 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-kubelet\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695262 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-etc-kubernetes\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695268 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-conf-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695277 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfn4s\" (UniqueName: \"kubernetes.io/projected/7919e98f-cc47-4f3c-9c53-6313850ea7b8-kube-api-access-sfn4s\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695302 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-cni-bin\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695323 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-daemon-config\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695284 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-kubelet\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695355 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7919e98f-cc47-4f3c-9c53-6313850ea7b8-mcd-auth-proxy-config\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " 
pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695372 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-cni-bin\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695873 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-socket-dir-parent\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.706132 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.728502 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.735568 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.735799 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.735862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.735922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.735988 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.756212 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.776411 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:05:55.285114617 +0000 UTC Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.782701 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.820984 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.821453 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.821655 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.822035 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.822091 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.837897 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.838128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.838221 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.838308 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.838379 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.867386 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.877939 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-8lwfx"] Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.878621 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.887276 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-prbrn"] Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.888145 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.889072 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.892449 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.892685 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.895382 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.896103 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.897490 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.897704 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.902012 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.910066 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.911835 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.940719 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.940757 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.940765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.940779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.940789 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.952742 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: 
I0202 10:39:08.955834 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 10:34:07 +0000 UTC, rotation deadline is 2026-12-26 05:07:12.499272347 +0000 UTC Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.955983 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7842h28m3.543294181s for next certificate rotation Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.976384 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.993901 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.002977 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdc79\" (UniqueName: \"kubernetes.io/projected/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-kube-api-access-cdc79\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003022 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-node-log\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003053 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-os-release\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003079 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" 
(UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-system-cni-dir\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003102 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-netns\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003123 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-config\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003188 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-var-lib-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003246 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8flt\" (UniqueName: \"kubernetes.io/projected/2642ee4e-c16a-4e6e-9654-a67666f1bff8-kube-api-access-g8flt\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003313 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003342 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovn-node-metrics-cert\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003370 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003410 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-kubelet\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003445 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-slash\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003470 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-systemd\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003491 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-etc-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003536 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cni-binary-copy\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003563 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-netd\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003627 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-env-overrides\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003671 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-script-lib\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003696 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-ovn-kubernetes\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003716 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-bin\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003753 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cnibin\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003810 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-systemd-units\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003844 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003867 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003915 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-ovn\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003935 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-log-socket\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.012161 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fptzv" event={"ID":"fa0a3c57-fe47-43dd-8905-00df4cae4fb8","Type":"ContainerStarted","Data":"fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.012211 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fptzv" event={"ID":"fa0a3c57-fe47-43dd-8905-00df4cae4fb8","Type":"ContainerStarted","Data":"15426a6ab4d45af37c4415df495b075538111dc47d3de3cd9e5cc2ece82fb0d3"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.013465 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.030042 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.042820 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.042859 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.042877 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.042893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.042906 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.050560 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38
:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6
ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.064806 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.077978 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.089811 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104742 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-env-overrides\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104776 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-script-lib\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104792 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-ovn-kubernetes\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104808 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-bin\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104823 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cnibin\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104844 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-systemd-units\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104857 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104910 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104972 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-ovn\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-log-socket\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105017 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdc79\" (UniqueName: \"kubernetes.io/projected/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-kube-api-access-cdc79\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105036 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-node-log\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-os-release\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105065 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-system-cni-dir\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105085 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-netns\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105104 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-config\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105123 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-var-lib-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105157 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g8flt\" (UniqueName: \"kubernetes.io/projected/2642ee4e-c16a-4e6e-9654-a67666f1bff8-kube-api-access-g8flt\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105189 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105214 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovn-node-metrics-cert\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105242 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105261 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-kubelet\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105281 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-slash\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105301 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-systemd\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105320 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-etc-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105366 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cni-binary-copy\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105387 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-netd\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105421 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-env-overrides\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105497 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-netns\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105523 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-ovn-kubernetes\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105545 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-bin\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105579 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-log-socket\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105613 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-ovn\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105700 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cnibin\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105736 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-systemd-units\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105768 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105804 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105819 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-node-log\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105833 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-kubelet\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105866 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-os-release\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105890 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-system-cni-dir\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105912 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-etc-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105933 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-slash\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105954 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-systemd\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105967 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-script-lib\") pod 
\"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.106416 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-config\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.106524 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-var-lib-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.106553 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-netd\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.106916 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.107260 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.110419 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovn-node-metrics-cert\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.110670 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name
\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.125097 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8flt\" (UniqueName: \"kubernetes.io/projected/2642ee4e-c16a-4e6e-9654-a67666f1bff8-kube-api-access-g8flt\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.126161 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.140912 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.145054 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.145296 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.145396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.145486 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.145563 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.155485 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.169509 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc1
8fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.180143 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.196449 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.217782 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.217853 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d55
4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.231476 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: W0202 10:39:09.237434 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2642ee4e_c16a_4e6e_9654_a67666f1bff8.slice/crio-db6a9af8a980d743bf0b991e52f1aa50a4a04f4b9f2306a972866beef0456ce6 WatchSource:0}: Error finding container db6a9af8a980d743bf0b991e52f1aa50a4a04f4b9f2306a972866beef0456ce6: Status 404 returned error can't find the container with id db6a9af8a980d743bf0b991e52f1aa50a4a04f4b9f2306a972866beef0456ce6 Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.246750 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.247440 4782 scope.go:117] "RemoveContainer" containerID="b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057" Feb 02 10:39:09 crc kubenswrapper[4782]: E0202 10:39:09.247599 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.248220 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.248255 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.248264 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.248277 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.248286 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.252900 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.264688 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: 
I0202 10:39:09.275784 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.351600 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.351654 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.351673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.351698 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.351712 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.402664 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.410064 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.410112 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfn4s\" (UniqueName: \"kubernetes.io/projected/7919e98f-cc47-4f3c-9c53-6313850ea7b8-kube-api-access-sfn4s\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.416027 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-cni-binary-copy\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.418418 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cni-binary-copy\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.444777 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.446290 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7919e98f-cc47-4f3c-9c53-6313850ea7b8-mcd-auth-proxy-config\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 
10:39:09.452387 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.452558 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.455159 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdc79\" (UniqueName: \"kubernetes.io/projected/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-kube-api-access-cdc79\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.459039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.459191 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.459252 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.459311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.459364 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.460921 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nrfs\" (UniqueName: \"kubernetes.io/projected/04d9744a-e730-45b4-9f0c-bbb5b02cd311-kube-api-access-7nrfs\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.492366 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: W0202 10:39:09.502536 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1edc5703_bb51_4f8a_9b73_68ba48a40ce8.slice/crio-856fb117fa4f01a752b2f536215bb78bd2e592e12fc4eee7014aff2201ccfb49 WatchSource:0}: Error finding container 856fb117fa4f01a752b2f536215bb78bd2e592e12fc4eee7014aff2201ccfb49: Status 404 returned error can't find the container with id 856fb117fa4f01a752b2f536215bb78bd2e592e12fc4eee7014aff2201ccfb49 Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.562281 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.562345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.562361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.562386 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.562403 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.665256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.665296 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.665310 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.665327 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.665338 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.668325 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.678851 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7919e98f-cc47-4f3c-9c53-6313850ea7b8-proxy-tls\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:09 crc kubenswrapper[4782]: E0202 10:39:09.696430 4782 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Feb 02 10:39:09 crc kubenswrapper[4782]: E0202 10:39:09.696592 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-daemon-config podName:04d9744a-e730-45b4-9f0c-bbb5b02cd311 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:10.196554565 +0000 UTC m=+30.080747281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-daemon-config") pod "multus-fsqgq" (UID: "04d9744a-e730-45b4-9f0c-bbb5b02cd311") : failed to sync configmap cache: timed out waiting for the condition Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.704440 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:09 crc kubenswrapper[4782]: W0202 10:39:09.714845 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7919e98f_cc47_4f3c_9c53_6313850ea7b8.slice/crio-5102c5c3f325a5afa89046628d6d266ec1f65686b83dcaa866c25ad93679be52 WatchSource:0}: Error finding container 5102c5c3f325a5afa89046628d6d266ec1f65686b83dcaa866c25ad93679be52: Status 404 returned error can't find the container with id 5102c5c3f325a5afa89046628d6d266ec1f65686b83dcaa866c25ad93679be52 Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.774718 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.774758 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.774770 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.774792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.774806 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.777579 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:02:51.516018478 +0000 UTC Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.820772 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:09 crc kubenswrapper[4782]: E0202 10:39:09.820895 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.878037 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.878080 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.878092 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.878108 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.878119 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.928759 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.981811 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.981890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.981906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.981928 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.981942 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.018556 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.018614 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.018627 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"5102c5c3f325a5afa89046628d6d266ec1f65686b83dcaa866c25ad93679be52"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.020207 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343" exitCode=0 Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.020279 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.020298 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"db6a9af8a980d743bf0b991e52f1aa50a4a04f4b9f2306a972866beef0456ce6"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.023118 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerStarted","Data":"6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.023206 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerStarted","Data":"856fb117fa4f01a752b2f536215bb78bd2e592e12fc4eee7014aff2201ccfb49"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.039902 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.057604 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.078032 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.084932 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.084987 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.085002 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.085024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.085039 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.095625 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.109579 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.128135 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting
\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.149491 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.163172 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.175409 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.187405 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.187809 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.187888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.187898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.187917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.187928 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.209958 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.215409 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-daemon-config\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.216121 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-daemon-config\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.225978 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.242247 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.255267 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.265580 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fsqgq" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.275338 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9
555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.296538 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.296582 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.296593 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.296607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.296617 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.297171 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.317254 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.348059 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\
\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.366517 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.380908 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.394160 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.398671 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.398712 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.398727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.398743 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.398754 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.408248 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-thvm5"] Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.408655 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.410046 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb5
3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.411399 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.411912 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.411921 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.411968 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.423750 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.439710 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f
6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.459889 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.479040 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.494659 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.501203 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.501250 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.501265 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.501282 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.501293 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.505153 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.518362 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/70faa63d-a86d-45aa-b6fd-81fa90436da2-host\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.518513 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/70faa63d-a86d-45aa-b6fd-81fa90436da2-serviceca\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.518630 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrknp\" (UniqueName: \"kubernetes.io/projected/70faa63d-a86d-45aa-b6fd-81fa90436da2-kube-api-access-vrknp\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.518566 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.531445 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.545167 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.563016 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.581279 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.604041 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.604079 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.604092 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.604106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.604115 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.604736 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.616933 4782 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.619015 4782 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.619789 4782 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.619829 4782 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - 
watch lasted less than a second and no items received Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.619853 4782 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.619873 4782 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Feb 02 10:39:10 crc kubenswrapper[4782]: E0202 10:39:10.619891 4782 request.go:1255] Unexpected error when reading response body: read tcp 38.102.83.147:35420->38.102.83.147:6443: use of closed network connection Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.619925 4782 status_manager.go:851] "Failed to get status for pod" podUID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" pod="openshift-multus/multus-fsqgq" err="unexpected error when reading response body. Please retry. Original error: read tcp 38.102.83.147:35420->38.102.83.147:6443: use of closed network connection" Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.620264 4782 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 02 10:39:10 crc kubenswrapper[4782]: E0202 10:39:10.620500 4782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/events\": read tcp 38.102.83.147:35420->38.102.83.147:6443: use of closed network connection" event="&Event{ObjectMeta:{ovnkube-node-prbrn.189067c54a09181c openshift-ovn-kubernetes 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ovn-kubernetes,Name:ovnkube-node-prbrn,UID:2642ee4e-c16a-4e6e-9654-a67666f1bff8,APIVersion:v1,ResourceVersion:26753,FieldPath:spec.containers{ovn-acl-logging},},Reason:Started,Message:Started container ovn-acl-logging,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:39:10.604933148 +0000 UTC m=+30.489125864,LastTimestamp:2026-02-02 10:39:10.604933148 +0000 UTC m=+30.489125864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.621833 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrknp\" (UniqueName: \"kubernetes.io/projected/70faa63d-a86d-45aa-b6fd-81fa90436da2-kube-api-access-vrknp\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.621861 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70faa63d-a86d-45aa-b6fd-81fa90436da2-host\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.621920 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/70faa63d-a86d-45aa-b6fd-81fa90436da2-serviceca\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.621999 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70faa63d-a86d-45aa-b6fd-81fa90436da2-host\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.623262 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/70faa63d-a86d-45aa-b6fd-81fa90436da2-serviceca\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.684498 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrknp\" (UniqueName: \"kubernetes.io/projected/70faa63d-a86d-45aa-b6fd-81fa90436da2-kube-api-access-vrknp\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.690231 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.706511 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.706544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.706553 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.706565 4782 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.706573 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.721026 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.724557 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\"
:\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.732652 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70faa63d_a86d_45aa_b6fd_81fa90436da2.slice/crio-7719e3faa3918ee7f2383e9fb08bc3987d87b83dc30c2646f41afbb9821acfae WatchSource:0}: Error finding container 7719e3faa3918ee7f2383e9fb08bc3987d87b83dc30c2646f41afbb9821acfae: Status 404 returned error can't find the container with id 7719e3faa3918ee7f2383e9fb08bc3987d87b83dc30c2646f41afbb9821acfae Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.739923 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.768485 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.778675 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:58:14.871934673 +0000 UTC Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.782319 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.802787 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.808970 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.809004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.809012 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.809028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.809036 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.817791 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.820209 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:10 crc kubenswrapper[4782]: E0202 10:39:10.820294 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.820348 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:10 crc kubenswrapper[4782]: E0202 10:39:10.820386 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.828753 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.846618 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.865616 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.883571 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.902625 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.911303 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.911329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.911338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.911351 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.911443 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.914066 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.931998 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.950795 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018d
c14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.966979 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.983857 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.995589 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.007064 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.015803 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.015862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.015876 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.015893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.015906 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.022562 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.026396 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-thvm5" event={"ID":"70faa63d-a86d-45aa-b6fd-81fa90436da2","Type":"ContainerStarted","Data":"bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.026436 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-thvm5" event={"ID":"70faa63d-a86d-45aa-b6fd-81fa90436da2","Type":"ContainerStarted","Data":"7719e3faa3918ee7f2383e9fb08bc3987d87b83dc30c2646f41afbb9821acfae"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.027906 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerStarted","Data":"9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.027964 4782 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerStarted","Data":"a93dd65fa4a1454836b2ef587d82693a6b41702e637b198b7a7758187c0b626b"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.032235 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.032274 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.032287 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.032295 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.033986 4782 generic.go:334] "Generic (PLEG): container finished" podID="1edc5703-bb51-4f8a-9b73-68ba48a40ce8" containerID="6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d" exitCode=0 Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.034010 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerDied","Data":"6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.044967 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.058625 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.080618 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/
\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.092477 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.111988 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc 
kubenswrapper[4782]: I0202 10:39:11.118835 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.119306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.119333 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.119350 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.119364 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.125939 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.159818 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.202394 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.221261 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.221292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.221300 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.221312 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.221322 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.239086 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.279079 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.323619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.323672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.323682 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.323697 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.323707 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.326392 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.361826 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.399102 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.425822 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.425871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.425880 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.425894 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.425902 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.441010 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.486138 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.521191 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.527867 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.527910 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.527920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.527934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.527943 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.562020 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.602257 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.610500 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.630206 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.630241 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.630249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.630263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.630271 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.658958 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.732491 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.732525 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.732533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.732547 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.732555 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.779278 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:48:40.23479858 +0000 UTC Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.820753 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:11 crc kubenswrapper[4782]: E0202 10:39:11.820871 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.834370 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.834408 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.834424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.834441 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.834452 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.936822 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.936879 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.936890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.936911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.936925 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.038245 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerStarted","Data":"3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.038446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.038468 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.038479 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.038490 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.038502 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.041847 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.041884 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.057828 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z 
is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.074292 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.095289 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.109257 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.121755 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.124032 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.137031 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.140275 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.140307 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.140318 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.140333 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.140343 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.151056 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.159361 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.163503 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.174194 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.193299 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.204332 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.223451 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f7
5e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.237431 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.241873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.241909 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.241922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.241938 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.241950 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.250414 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.278307 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.319350 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.344176 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.344217 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.344230 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.344245 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.344255 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.361315 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.437259 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.441611 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.446898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.446940 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.446950 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.446968 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.446978 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.476777 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.503818 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.540532 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.549801 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.549836 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.549847 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.549863 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.550014 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.578980 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.618266 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.652665 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.652762 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.652775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.652807 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.652819 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.663440 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.731706 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.745059 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.754459 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.754501 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.754510 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.754523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.754531 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.780218 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:18:42.885328311 +0000 UTC Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.784045 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z 
is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.820708 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.820723 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:12 crc kubenswrapper[4782]: E0202 10:39:12.820838 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.820904 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: E0202 10:39:12.820936 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.858792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.858838 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.858847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.858862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.858872 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.875251 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.901171 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.961755 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.961789 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.961797 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.961809 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.961818 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.046910 4782 generic.go:334] "Generic (PLEG): container finished" podID="1edc5703-bb51-4f8a-9b73-68ba48a40ce8" containerID="3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353" exitCode=0 Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.046954 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerDied","Data":"3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.062283 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.063873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.063908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.063917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.063931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.063942 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.075544 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.082765 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.089281 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.100825 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.123448 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.160362 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.165729 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.165885 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.166006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.166118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.166225 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.199905 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.238593 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.268980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.269009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.269019 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.269032 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.269041 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.277188 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.319568 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.365620 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.371614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.371636 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.371659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.371671 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.371681 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.401925 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.440068 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.477741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.477769 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.477778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.477790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.477800 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.483563 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.524033 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.579796 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.579876 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.579886 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.579901 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.579911 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.682207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.682247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.682264 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.682280 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.682291 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.780364 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:17:17.649233049 +0000 UTC Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.784009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.784040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.784050 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.784063 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.784075 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.820410 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:13 crc kubenswrapper[4782]: E0202 10:39:13.820519 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.886384 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.886414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.886428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.886442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.886453 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.989196 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.989226 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.989236 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.989250 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.989261 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.053724 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.055379 4782 generic.go:334] "Generic (PLEG): container finished" podID="1edc5703-bb51-4f8a-9b73-68ba48a40ce8" containerID="615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335" exitCode=0 Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.055408 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerDied","Data":"615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.072508 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.092901 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.092936 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.092955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.092986 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.092996 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.097186 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760b
b3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.111324 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.122573 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.138377 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.149252 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.162908 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.178490 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.191984 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.199338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.199369 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.199377 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.199390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.199399 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.205513 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.219462 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.235516 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.256810 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.271364 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.289720 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.302857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.302926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.302958 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.302986 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.302999 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.405383 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.405419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.405442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.405455 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.405465 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.507392 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.507433 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.507442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.507456 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.507465 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.609466 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.609504 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.609517 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.609533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.609546 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.712009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.712041 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.712049 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.712062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.712072 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.780792 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:37:11.15392454 +0000 UTC Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.814657 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.814697 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.814706 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.814723 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.814734 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.820258 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.820357 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:14 crc kubenswrapper[4782]: E0202 10:39:14.820402 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:14 crc kubenswrapper[4782]: E0202 10:39:14.820523 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.917239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.917273 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.917283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.917298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.917307 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.019326 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.019386 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.019396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.019413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.019425 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.061248 4782 generic.go:334] "Generic (PLEG): container finished" podID="1edc5703-bb51-4f8a-9b73-68ba48a40ce8" containerID="143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df" exitCode=0 Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.061296 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerDied","Data":"143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.075938 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.092055 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.105826 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.117605 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.121088 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.121126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.121138 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.121154 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.121165 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.128697 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.143469 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.153015 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.172012 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036c
c2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.185626 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.199314 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.209886 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.223088 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.223124 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.223135 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.223150 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.223161 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.237970 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.252159 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.263598 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.276984 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.324970 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.325023 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.325033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.325047 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.325056 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.426956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.426995 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.427003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.427018 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.427027 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.529025 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.529066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.529079 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.529096 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.529109 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.631808 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.631858 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.631873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.631895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.631914 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.733501 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.733549 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.733561 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.733576 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.733586 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.781112 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:49:46.930284221 +0000 UTC Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.820465 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:15 crc kubenswrapper[4782]: E0202 10:39:15.820567 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.836947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.837276 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.837287 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.837300 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.837310 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.939960 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.940007 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.940020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.940038 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.940053 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.042296 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.042366 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.042378 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.042391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.042405 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.067074 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.069404 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerStarted","Data":"754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.089992 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.102109 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.113680 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.123669 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.133948 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.144958 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.144991 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.144999 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.145012 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.145035 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.148437 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.162065 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.176745 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.196019 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.209225 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.221609 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.235513 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.247248 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.247289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.247298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.247313 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.247322 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.249070 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.262105 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.277986 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28
339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.349757 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.349818 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.349836 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.349857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.349875 
4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.451765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.451792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.451800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.451813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.451822 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.554121 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.554158 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.554169 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.554184 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.554196 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.656713 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.656760 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.656771 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.656788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.656800 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.678073 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.678201 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678226 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:39:32.678202449 +0000 UTC m=+52.562395165 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.678270 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.678314 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678331 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.678339 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678359 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678374 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678391 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678421 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:32.678405154 +0000 UTC m=+52.562597930 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678439 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:32.678431495 +0000 UTC m=+52.562624211 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678453 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678478 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678494 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678502 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:32.678494997 +0000 UTC m=+52.562687713 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678505 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678537 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:32.678528238 +0000 UTC m=+52.562721014 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.759087 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.759127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.759136 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.759152 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.759161 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.781537 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:08:21.353181638 +0000 UTC Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.820987 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.821040 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.821113 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.821168 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.861380 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.861429 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.861439 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.861452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.861461 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.964034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.964074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.964086 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.964100 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.964109 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.066633 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.066708 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.066720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.066737 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.066749 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.071875 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.071936 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.096099 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\
"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.111965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.112008 4782 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.112022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.112044 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.112058 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.113001 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.125656 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.127926 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.130976 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.130999 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.131007 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.131020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.131030 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.139984 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.140042 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.140946 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.147449 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.150283 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.150840 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.150861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.150868 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.150880 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.150891 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.161406 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.165686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.165714 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.165723 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.165736 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.165745 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.165849 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.176445 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.177778 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.180072 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.180096 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.180104 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.180118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.180128 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
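Has your network provider started?"}

The NodeNotReady condition above is a second, independent problem: the kubelet keeps the node NotReady until a CNI network config appears in /etc/kubernetes/cni/net.d/, and on an OVN cluster that file is only written once ovnkube-controller is up. What follows is a rough standalone approximation of that readiness check, not the real implementation (which lives in libcni, driven through CRI-O); the directory comes from the message above, and the set of accepted file extensions is an assumption.

    // cni_ready_check.go: rough approximation of the NetworkReady probe; the
    // authoritative logic lives in libcni/CRI-O, not in this file.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("NetworkReady=false:", err)
            return
        }
        for _, e := range entries {
            // Extensions commonly accepted for CNI configs (assumption).
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("NetworkReady=true, config found:", e.Name())
                return
            }
        }
        fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
    }

Once the network plugin drops its config into that directory, NetworkReady should flip to true on the next runtime status sync and these NodeNotReady events stop.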
Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.189491 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.191559 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b
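85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.191674 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"

The "exceeds retry count" line closes out one full node-status sync: the kubelet attempts the PATCH a fixed number of times per sync and then gives up until the next sync interval, which is why the same enormous status body keeps reappearing verbatim throughout this log. The loop below is a schematic sketch of that control flow, assuming this build matches upstream kubelet, where the per-sync budget is the constant nodeStatusUpdateRetry = 5; it is not the actual kubelet code.

    // status_retry.go: schematic sketch of the kubelet's per-sync retry budget
    // (assumption: the upstream constant nodeStatusUpdateRetry = 5 applies to
    // this build as well).
    package main

    import (
        "errors"
        "fmt"
    )

    const nodeStatusUpdateRetry = 5

    // patchNodeStatus stands in for the PATCH that the admission webhook keeps
    // rejecting; with an expired certificate every attempt fails identically.
    func patchNodeStatus() error {
        return errors.New("x509: certificate has expired or is not yet valid")
    }

    func main() {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            err := patchNodeStatus()
            if err == nil {
                return
            }
            fmt.Println("Error updating node status, will retry:", err)
        }
        fmt.Println("Unable to update node status: update node status exceeds retry count")
    }

Because the certificate failure is deterministic rather than transient, every attempt fails the same way within milliseconds, matching the cluster of E-level entries above that all share the 10:39:17 timestamp.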
Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.193074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.193107 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.193117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.193130 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.193139 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.208819 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a
352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.222045 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.233361 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.245263 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.257072 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.270169 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.283536 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28
339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.295263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.295302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.295313 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.295329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.295340 
4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.296495 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.312005 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.323264 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.333788 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.346684 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.358602 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.371507 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.384440 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.398146 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.398217 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.398236 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.398256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.398268 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.405537 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.448527 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.486836 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.501143 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.501184 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.501193 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.501208 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.501222 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.508073 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.524682 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.544925 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.558932 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.603502 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.603710 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.603720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.603733 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.603743 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.706346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.706686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.706794 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.706911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.707036 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.782406 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:33:41.271003033 +0000 UTC Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.810311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.810509 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.810607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.810709 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.810805 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.820620 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.820744 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.913609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.913680 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.913697 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.913712 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.913722 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.016407 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.016803 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.016869 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.016936 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.016998 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.073621 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.119113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.119301 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.119358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.119418 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.119497 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.222292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.222351 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.222369 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.222394 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.222419 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.325186 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.325246 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.325262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.325284 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.325304 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.428093 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.428156 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.428171 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.428192 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.428207 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.530896 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.531204 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.531365 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.531521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.531685 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.634285 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.634335 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.634344 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.634360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.634370 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.736954 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.736993 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.737004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.737018 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.737029 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.783766 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:06:37.58175399 +0000 UTC
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.820458 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.820511 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:39:18 crc kubenswrapper[4782]: E0202 10:39:18.820575 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:39:18 crc kubenswrapper[4782]: E0202 10:39:18.820628 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.839350 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.839382 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.839583 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.839595 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.839604 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.941585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.941616 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.941624 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.941651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.941662 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.043371 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.043410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.043419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.043432 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.043441 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.081850 4782 generic.go:334] "Generic (PLEG): container finished" podID="1edc5703-bb51-4f8a-9b73-68ba48a40ce8" containerID="754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255" exitCode=0
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.081937 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerDied","Data":"754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255"}
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.082020 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.096527 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.110222 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.124279 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.135854 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.145488 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.145541 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.145555 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.145570 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.145601 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.147634 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.162624 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.182436 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.196240 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.208969 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.219410 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.231798 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.247394 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.247461 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.247470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.247483 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.247492 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.251166 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.263031 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.275382 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.292310 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.350668 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.350698 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.350707 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.350720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.350729 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.453078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.453120 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.453131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.453144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.453153 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.555919 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.555954 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.555963 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.555978 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.555987 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.659023 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.659067 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.659078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.659094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.659106 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.761522 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.761569 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.761578 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.761591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.761601 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.783923 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:15:05.083429233 +0000 UTC Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.820512 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:19 crc kubenswrapper[4782]: E0202 10:39:19.820709 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.864085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.864121 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.864129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.864141 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.864152 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.966271 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.966305 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.966314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.966327 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.966336 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.068766 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.069109 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.069123 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.069138 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.069149 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.088221 4782 generic.go:334] "Generic (PLEG): container finished" podID="1edc5703-bb51-4f8a-9b73-68ba48a40ce8" containerID="1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d" exitCode=0 Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.088275 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerDied","Data":"1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.108977 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.125588 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.138514 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.150380 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.160323 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.171464 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.171516 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.171528 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.171544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.171555 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.175133 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.187567 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.199169 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.219543 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.233441 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.246672 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.261876 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.273614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.273806 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.273819 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.273833 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.273842 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.277142 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.288076 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.304980 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.376160 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.376190 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.376199 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.376212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.376222 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.478137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.478187 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.478203 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.478218 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.478227 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.580052 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.580297 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.580390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.580482 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.580617 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.686687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.686727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.686738 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.686774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.686786 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.784307 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:06:26.357072927 +0000 UTC Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.789106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.789140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.789151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.789166 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.789178 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.820537 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.820675 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:20 crc kubenswrapper[4782]: E0202 10:39:20.820763 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:20 crc kubenswrapper[4782]: E0202 10:39:20.820952 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.834526 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.845585 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.861376 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.878277 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.892534 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.892591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.892607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.892630 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.892665 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.893947 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.910602 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.931157 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.946846 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.966197 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.985513 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.995447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.995491 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.995507 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.995530 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.995548 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.997452 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.014001 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.030252 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.044495 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.065536 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.096340 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerStarted","Data":"2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.099353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.099428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.099449 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.099476 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.099501 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.110778 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.121587 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.140809 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.161664 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.184161 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.197840 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.201663 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.201830 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.201893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.201975 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.202031 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.208823 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.219438 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.237799 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.251576 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.263805 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.281505 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.299448 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.304226 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.304256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.304265 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.304278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.304287 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.314269 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.332509 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.406535 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.406575 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.406587 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.406603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.406615 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.462072 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn"] Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.462821 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.466363 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.466659 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.481845 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.494515 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.509368 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.509604 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.509760 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.509869 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.509932 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.510237 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.524331 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.537814 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.551660 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca
6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.565968 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.576309 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",
\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.595471 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa372326
9019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.609822 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.612020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.612051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.612062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.612078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.612089 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.622247 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.628332 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.628373 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxkkz\" (UniqueName: \"kubernetes.io/projected/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-kube-api-access-gxkkz\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.628407 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.628530 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.633282 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.651545 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.665406 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.679035 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.693258 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.714784 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.714829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.714841 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.714857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.714868 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.729520 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.729564 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxkkz\" (UniqueName: \"kubernetes.io/projected/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-kube-api-access-gxkkz\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.729592 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.729620 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.730432 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: 
\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.730504 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.735514 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.746432 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxkkz\" (UniqueName: \"kubernetes.io/projected/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-kube-api-access-gxkkz\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.776573 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.784602 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:42:07.037881664 +0000 UTC Feb 02 10:39:21 crc kubenswrapper[4782]: W0202 10:39:21.789885 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod324c55ff_8d31_4452_bb4e_2a57fbdb23c7.slice/crio-13ec16eab39975cdd9277cfd4d1eb4842e67856b1a2064d0d05ac49ff82a2177 WatchSource:0}: Error finding container 13ec16eab39975cdd9277cfd4d1eb4842e67856b1a2064d0d05ac49ff82a2177: Status 404 returned error can't find the container with id 13ec16eab39975cdd9277cfd4d1eb4842e67856b1a2064d0d05ac49ff82a2177 Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.821787 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:21 crc kubenswrapper[4782]: E0202 10:39:21.822164 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.826057 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.826407 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.826527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.827865 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.828001 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.930333 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.930364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.930373 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.930386 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.930397 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.032849 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.033846 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.034031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.034245 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.034414 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.101158 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" event={"ID":"324c55ff-8d31-4452-bb4e-2a57fbdb23c7","Type":"ContainerStarted","Data":"13ec16eab39975cdd9277cfd4d1eb4842e67856b1a2064d0d05ac49ff82a2177"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.137591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.137737 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.137763 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.137803 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.137829 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.244609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.244692 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.244708 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.244728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.244740 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.346926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.346959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.346966 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.346980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.346989 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.449441 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.449490 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.449504 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.449521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.449533 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.537707 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-tv4xc"] Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.538520 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:22 crc kubenswrapper[4782]: E0202 10:39:22.538621 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.552128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.552159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.552168 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.552181 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.552190 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.582746 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.631262 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.638880 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.638915 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpm5q\" (UniqueName: \"kubernetes.io/projected/4e23db96-3af7-4c29-b00f-5920a9431f01-kube-api-access-gpm5q\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.643539 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.654203 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.655393 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.655407 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.655423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.655449 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.664328 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.678376 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.691613 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.704231 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.719086 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.732145 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca
6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.740246 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.740288 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpm5q\" (UniqueName: \"kubernetes.io/projected/4e23db96-3af7-4c29-b00f-5920a9431f01-kube-api-access-gpm5q\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:22 crc kubenswrapper[4782]: E0202 10:39:22.740473 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:22 crc kubenswrapper[4782]: E0202 10:39:22.740570 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:23.240549411 +0000 UTC m=+43.124742217 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.749315 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.758113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.758168 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.758180 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.758193 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.758202 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.761676 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpm5q\" (UniqueName: \"kubernetes.io/projected/4e23db96-3af7-4c29-b00f-5920a9431f01-kube-api-access-gpm5q\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.762776 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.775044 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.785068 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:41:32.462083146 +0000 UTC Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.792397 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.807365 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.820085 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.820147 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:22 crc kubenswrapper[4782]: E0202 10:39:22.820197 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:22 crc kubenswrapper[4782]: E0202 10:39:22.820256 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.821821 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.842087 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.853686 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.860546 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.860592 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.860603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.860620 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.860632 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.962741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.962774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.962783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.962795 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.962804 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.065450 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.065494 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.065506 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.065523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.065535 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.105976 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/0.log" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.108948 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4" exitCode=1 Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.109203 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.110357 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" event={"ID":"324c55ff-8d31-4452-bb4e-2a57fbdb23c7","Type":"ContainerStarted","Data":"025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.111214 4782 scope.go:117] "RemoveContainer" containerID="6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.125492 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.140365 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.156573 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.167914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.167957 4782 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.167967 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.167983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.167993 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.170861 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.183481 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.208155 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.221831 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.233430 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.242622 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.245145 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:23 crc kubenswrapper[4782]: E0202 10:39:23.245261 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:23 crc kubenswrapper[4782]: E0202 10:39:23.245309 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 
nodeName:}" failed. No retries permitted until 2026-02-02 10:39:24.245293445 +0000 UTC m=+44.129486161 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.254717 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.268108 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.269812 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.269845 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.269856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.269874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.269886 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.279053 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.288674 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.304478 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a
352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248476 5945 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248620 5945 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:39:22.249325 5945 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:39:22.249385 5945 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:39:22.249394 5945 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:39:22.249407 5945 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:39:22.249422 5945 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:39:22.249443 5945 factory.go:656] Stopping watch factory\\\\nI0202 10:39:22.249459 5945 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:39:22.249490 5945 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:39:22.249499 5945 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:39:22.249504 5945 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:39:22.249510 5945 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 
10:39:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.317834 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.330659 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.344900 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.372732 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.372766 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.372780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.372796 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.372808 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.475155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.475490 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.475574 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.475658 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.475732 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.581215 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.581291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.581303 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.581320 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.581337 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.684055 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.684102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.684114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.684130 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.684142 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.785183 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:18:18.714988067 +0000 UTC
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.786673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.786702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.786714 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.786728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.786738 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.820662 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:39:23 crc kubenswrapper[4782]: E0202 10:39:23.821049 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.821184 4782 scope.go:117] "RemoveContainer" containerID="b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.889925 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.889989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.890002 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.890023 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.890033 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.993629 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.993684 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.993696 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.993712 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.993722 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.096826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.096893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.096909 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.096929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.096939 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.115053 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" event={"ID":"324c55ff-8d31-4452-bb4e-2a57fbdb23c7","Type":"ContainerStarted","Data":"1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3"}
Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.116796 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/0.log"
Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.119336 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592"}
Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.119510 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.122127 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.124165 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60"}
Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.137754 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.151532 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.162933 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.173873 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.182985 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.195391 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.199297 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.199364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.199377 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.199399 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.199411 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.208095 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.218994 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.237050 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a
352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248476 5945 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248620 5945 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:39:22.249325 5945 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:39:22.249385 5945 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:39:22.249394 5945 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:39:22.249407 5945 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:39:22.249422 5945 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:39:22.249443 5945 factory.go:656] Stopping watch factory\\\\nI0202 10:39:22.249459 5945 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:39:22.249490 5945 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:39:22.249499 5945 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:39:22.249504 5945 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:39:22.249510 5945 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 
10:39:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.249188 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.258497 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:24 crc kubenswrapper[4782]: E0202 10:39:24.258600 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:24 crc kubenswrapper[4782]: E0202 10:39:24.258666 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:26.25863497 +0000 UTC m=+46.142827686 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.261255 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.272836 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.281512 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc 
kubenswrapper[4782]: I0202 10:39:24.293619 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.301581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.301622 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.301634 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.301701 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.301716 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.307135 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.320418 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 
2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.331206 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.352043 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.364786 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.377331 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.386757 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.396207 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.403700 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.403729 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.403738 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.403750 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.403759 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.408825 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.419305 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.429879 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.449493 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248476 5945 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248620 5945 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:39:22.249325 5945 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:39:22.249385 5945 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:39:22.249394 5945 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:39:22.249407 5945 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:39:22.249422 5945 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:39:22.249443 5945 factory.go:656] Stopping watch factory\\\\nI0202 10:39:22.249459 5945 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:39:22.249490 5945 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:39:22.249499 5945 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:39:22.249504 5945 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:39:22.249510 5945 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 
10:39:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.467803 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.481129 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.497681 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.506035 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.506066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.506075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.506089 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.506099 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.510757 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.527345 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.538062 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca
6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.552056 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.606777 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.607994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.608031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.608044 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.608061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.608074 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.710297 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.710334 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.710342 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.710355 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.710366 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.785682 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:11:29.534160788 +0000 UTC Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.812996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.813033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.813043 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.813057 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.813066 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.820349 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.820357 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:24 crc kubenswrapper[4782]: E0202 10:39:24.820455 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.820485 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:24 crc kubenswrapper[4782]: E0202 10:39:24.820544 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:24 crc kubenswrapper[4782]: E0202 10:39:24.820731 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.915020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.915064 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.915078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.915094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.915107 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.017306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.017343 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.017353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.017370 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.017383 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.119790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.119837 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.119851 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.119867 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.119877 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.150615 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.168792 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.185693 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.198100 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.207917 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.222423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.222473 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.222483 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.222502 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.222511 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.222880 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.235954 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.247088 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.265016 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d
8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248476 5945 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248620 5945 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:39:22.249325 5945 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:39:22.249385 5945 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:39:22.249394 5945 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:39:22.249407 5945 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:39:22.249422 5945 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:39:22.249443 5945 factory.go:656] Stopping watch factory\\\\nI0202 10:39:22.249459 5945 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:39:22.249490 5945 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:39:22.249499 5945 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:39:22.249504 5945 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:39:22.249510 5945 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 
10:39:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.278892 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.292343 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.307788 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.320452 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.324266 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.324301 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.324337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.324353 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.324365 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.332936 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.352075 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eead
e0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\
\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.362994 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.375129 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.426815 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.426847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.426857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.426869 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.426878 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.528593 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.528662 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.528674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.528689 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.528701 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.631240 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.631278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.631287 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.631300 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.631310 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.733528 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.733561 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.733570 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.733583 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.733593 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.786303 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:04:46.937516271 +0000 UTC Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.820791 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:25 crc kubenswrapper[4782]: E0202 10:39:25.820952 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.835871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.835901 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.835912 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.835934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.835946 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.937790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.937827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.937836 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.937849 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.937859 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.040309 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.040349 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.040359 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.040376 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.040387 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.132085 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/1.log" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.132916 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/0.log" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.135252 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592" exitCode=1 Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.135296 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.135337 4782 scope.go:117] "RemoveContainer" containerID="6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.136123 4782 scope.go:117] "RemoveContainer" containerID="648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592" Feb 02 10:39:26 crc kubenswrapper[4782]: E0202 10:39:26.136332 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.145476 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.145523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.145535 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.145551 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.145562 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.150728 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.167418 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248476 5945 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248620 5945 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:39:22.249325 5945 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:39:22.249385 5945 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:39:22.249394 5945 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:39:22.249407 5945 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:39:22.249422 5945 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:39:22.249443 5945 factory.go:656] Stopping watch factory\\\\nI0202 10:39:22.249459 5945 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:39:22.249490 5945 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:39:22.249499 5945 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:39:22.249504 5945 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:39:22.249510 5945 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:39:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 
model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.181836 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.196118 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.210171 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.221863 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.237528 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.247922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.247963 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.247977 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.247994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.248007 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.251335 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.261066 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.270044 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.280613 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:26 crc kubenswrapper[4782]: E0202 10:39:26.280804 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:26 crc kubenswrapper[4782]: E0202 10:39:26.280871 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:30.280854534 +0000 UTC m=+50.165047350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.282802 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.292865 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.302700 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.313033 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.335324 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.347731 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.350249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.350298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.350311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.350327 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.350338 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.360194 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.452590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.452630 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.452658 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.452672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.452682 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.554709 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.554746 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.554758 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.554771 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.554781 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
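
[annotation] Every "Failed to update status for pod" entry above fails identically: the API server cannot call the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 because its serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-02-02. A minimal probe that reproduces the verification failure from the node; the address and the expected error string come from the log, everything else is an illustrative assumption.

    import socket
    import ssl

    HOST, PORT = "127.0.0.1", 9743  # address taken from the Post URL in the log

    # A verifying handshake should fail the same way the webhook client does,
    # because notBefore/notAfter are checked during certificate verification.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False  # the webhook is addressed by IP, not hostname
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock) as tls:
                print("handshake OK:", tls.version())
    except ssl.SSLCertVerificationError as err:
        # Expected to mirror the journal: "certificate has expired"
        print("verification failed:", err.verify_message)

    # Fetch the certificate without verification so its validity window can be
    # inspected offline (e.g. pipe the PEM into `openssl x509 -noout -dates`).
    print(ssl.get_server_certificate((HOST, PORT)))
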
Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.657200 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.657231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.657239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.657251 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.657260 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.759385 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.759429 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.759440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.759456 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.759466 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.786870 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:57:00.268116208 +0000 UTC Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.820230 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.820275 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.820365 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:26 crc kubenswrapper[4782]: E0202 10:39:26.820382 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:26 crc kubenswrapper[4782]: E0202 10:39:26.820521 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:26 crc kubenswrapper[4782]: E0202 10:39:26.820587 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.861676 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.861718 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.861727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.861745 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.861755 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.964357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.964391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.964399 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.964414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.964424 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.066767 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.066800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.066810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.066822 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.066831 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.138899 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/1.log" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.169844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.169877 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.169888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.169903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.169914 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.272711 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.272775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.272787 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.272827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.272840 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.375619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.375687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.375706 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.375726 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.375737 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.478670 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.478715 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.478727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.478742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.478753 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.593299 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.593338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.593349 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.593363 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.593372 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.594314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.594336 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.594345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.594357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.594366 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.628820 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.634825 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.634881 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.634895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.634916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.634932 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.658975 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.666348 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.666396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.666408 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.666430 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.666445 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.684673 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.693742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.693808 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.693823 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.693843 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.693857 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.710883 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.715390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.715479 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.715496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.715516 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.715530 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.731778 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.732045 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.737212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.737257 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.737268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.737299 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.737318 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.787116 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:59:22.24361391 +0000 UTC Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.820566 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.820791 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.840359 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.840436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.840449 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.840471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.840483 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.946347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.946420 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.946438 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.946462 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.946479 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.050277 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.050346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.050359 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.050380 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.050393 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.152659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.152695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.152704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.152719 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.152728 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.255696 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.255768 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.255780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.255801 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.255814 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.358484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.358944 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.359080 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.359263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.359458 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.462876 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.462920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.462931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.462946 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.462958 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.566094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.566151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.566162 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.566187 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.566200 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.670074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.670132 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.670146 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.670167 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.670182 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.773779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.774202 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.774286 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.774378 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.774460 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.788276 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:26:17.480699684 +0000 UTC Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.821018 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.821018 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.821074 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:28 crc kubenswrapper[4782]: E0202 10:39:28.821820 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:28 crc kubenswrapper[4782]: E0202 10:39:28.821901 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:28 crc kubenswrapper[4782]: E0202 10:39:28.822008 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.878186 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.878252 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.878264 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.878285 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.878299 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.981922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.981979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.981990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.982010 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.982023 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.085509 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.085594 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.085613 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.085679 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.085699 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.188779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.188826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.188844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.188865 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.188878 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.291532 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.291906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.291916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.291933 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.291950 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.395282 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.395339 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.395351 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.395364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.395373 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.497596 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.497686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.497702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.497727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.497741 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.600397 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.600435 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.600445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.600458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.600468 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.703859 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.703929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.703941 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.703965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.703979 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.788759 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:30:06.360336849 +0000 UTC Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.807263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.807329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.807342 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.807358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.807370 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.834129 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:29 crc kubenswrapper[4782]: E0202 10:39:29.834276 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.909907 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.910198 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.910299 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.910428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.910535 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.013326 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.013364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.013374 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.013387 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.013396 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.116447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.116490 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.116499 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.116513 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.116523 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.218745 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.218775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.218784 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.218799 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.218810 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.320621 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.320677 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.320685 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.320699 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.320709 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.339103 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:30 crc kubenswrapper[4782]: E0202 10:39:30.339252 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:30 crc kubenswrapper[4782]: E0202 10:39:30.339352 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:38.339326795 +0000 UTC m=+58.223519581 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.423262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.423291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.423299 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.423312 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.423321 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.525086 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.525118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.525128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.525146 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.525156 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.627053 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.627102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.627113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.627131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.627143 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.729347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.729395 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.729407 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.729423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.729434 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.789487 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:39:24.813887658 +0000 UTC Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.794778 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.795762 4782 scope.go:117] "RemoveContainer" containerID="648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592" Feb 02 10:39:30 crc kubenswrapper[4782]: E0202 10:39:30.795931 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.806558 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.816316 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.820592 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.820592 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:30 crc kubenswrapper[4782]: E0202 10:39:30.820702 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:30 crc kubenswrapper[4782]: E0202 10:39:30.820745 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.820608 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:30 crc kubenswrapper[4782]: E0202 10:39:30.821059 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.831744 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.831770 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.831780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.831792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.831801 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.839140 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.853080 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.867276 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.878630 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.896046 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d
8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.909069 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.921519 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.933131 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.934151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.934201 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.934212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.934223 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.934232 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.945207 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.959083 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.983445 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.996794 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.006960 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is 
after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.016995 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.027946 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.036723 4782 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.036765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.036776 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.036788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.036796 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.038827 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.050523 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.061500 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.071332 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.084504 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.095184 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.106528 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.126203 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.138715 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.138751 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.138761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.138774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.138783 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.139079 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.149173 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.160057 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.169686 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.190535 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.203828 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.215666 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.226717 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.241331 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.241363 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.241373 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.241389 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.241399 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.244227 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
10s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df23
43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.343785 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.343827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.343838 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.343856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.343866 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.379633 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.390379 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.400342 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-l
ib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed 
to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.412751 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.422866 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.432679 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.444513 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.445936 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.445969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.445979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.445996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.446009 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.458708 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.472318 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.485419 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.495322 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.508990 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.520964 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.534759 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.544279 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.547941 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.547971 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.547983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.547998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.548009 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.566671 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.578902 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.589392 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.598958 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.650156 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.650195 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.650207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.650222 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.650232 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.753943 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.754003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.754080 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.754101 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.754118 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.790132 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:39:26.043592754 +0000 UTC Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.820676 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:31 crc kubenswrapper[4782]: E0202 10:39:31.820790 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.855908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.855964 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.855980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.856001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.856015 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.958568 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.958600 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.958611 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.958625 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.958635 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.001128 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.060614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.060664 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.060673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.060687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.060696 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.163116 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.163148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.163156 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.163169 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.163177 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.265573 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.265905 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.266189 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.266319 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.266431 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.368800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.368864 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.368888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.368916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.368938 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.470663 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.470699 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.470707 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.470720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.470729 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.573721 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.573770 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.573783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.573801 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.573814 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.676585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.676871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.676939 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.676999 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.677053 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.762844 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.762928 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.762957 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.762981 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.763001 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763049 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:40:04.763022319 +0000 UTC m=+84.647215025 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763092 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763133 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763139 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:40:04.763123812 +0000 UTC m=+84.647316518 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763171 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:40:04.763164273 +0000 UTC m=+84.647356989 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763257 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763270 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763280 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763302 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-02 10:40:04.763296167 +0000 UTC m=+84.647488883 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763347 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763356 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763363 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763381 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:40:04.763375579 +0000 UTC m=+84.647568295 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.778991 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.779023 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.779031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.779044 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.779052 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.791119 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:00:15.446431563 +0000 UTC Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.820581 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.820619 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.820581 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.820700 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.820769 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.820815 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.881672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.881722 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.881731 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.881749 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.881760 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.984839 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.984875 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.984883 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.984897 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.984907 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.087730 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.087764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.087774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.087795 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.087820 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.189796 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.189830 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.189846 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.189878 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.189891 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.292567 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.292629 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.292669 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.292687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.292698 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.395081 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.395114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.395126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.395140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.395151 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.497313 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.497351 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.497361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.497374 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.497383 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.599839 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.599870 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.599879 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.599893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.599903 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.701859 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.701902 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.701914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.701930 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.701942 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.791298 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 17:22:55.988282642 +0000 UTC Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.804329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.804378 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.804390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.804409 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.804427 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.820677 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:33 crc kubenswrapper[4782]: E0202 10:39:33.820804 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.906323 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.906367 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.906379 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.906395 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.906406 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.008764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.008797 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.008808 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.008835 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.008845 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.111220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.111254 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.111266 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.111289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.111300 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.213687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.213770 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.213780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.213800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.213810 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.316860 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.316887 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.316896 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.316908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.316917 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.418555 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.418877 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.418986 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.419076 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.419168 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.521020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.521056 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.521069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.521085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.521096 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.623448 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.623487 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.623497 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.623512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.623523 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.726039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.726081 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.726106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.726121 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.726130 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.791840 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:37:30.298712448 +0000 UTC Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.820449 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.820455 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:34 crc kubenswrapper[4782]: E0202 10:39:34.820867 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.820513 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:34 crc kubenswrapper[4782]: E0202 10:39:34.821202 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:34 crc kubenswrapper[4782]: E0202 10:39:34.821053 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.828386 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.828415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.828423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.828434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.828444 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.931159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.931207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.931220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.931237 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.931250 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.033852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.033892 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.033903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.033922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.033931 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.135904 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.135946 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.135964 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.135982 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.135995 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.238743 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.238800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.238811 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.238827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.238837 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.340802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.340835 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.340844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.340858 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.340867 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.442965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.442996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.443005 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.443018 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.443028 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.545608 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.545665 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.545680 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.545695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.545707 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.648323 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.648384 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.648396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.648414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.648425 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.751451 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.751495 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.751512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.751529 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.751543 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.792595 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:55:18.787932906 +0000 UTC Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.820606 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:35 crc kubenswrapper[4782]: E0202 10:39:35.820792 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.854031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.854112 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.854132 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.854157 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.854173 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.957014 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.957063 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.957075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.957094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.957107 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.059319 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.059602 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.059739 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.059879 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.059959 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.163251 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.163318 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.163331 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.163356 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.163377 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.266664 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.266716 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.266728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.266747 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.266758 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.370305 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.370357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.370369 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.370388 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.370400 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.479172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.479219 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.479231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.479249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.479260 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.582526 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.582566 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.582576 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.582590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.582600 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.684867 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.684915 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.684926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.684945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.684957 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.788519 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.788577 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.788590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.788608 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.788620 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.792793 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:41:57.526518919 +0000 UTC Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.820830 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.820850 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.820897 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:36 crc kubenswrapper[4782]: E0202 10:39:36.820967 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:36 crc kubenswrapper[4782]: E0202 10:39:36.821063 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:36 crc kubenswrapper[4782]: E0202 10:39:36.821184 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.891888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.891931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.891941 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.891959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.891974 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.995169 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.995558 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.995712 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.995801 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.995880 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.098966 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.099018 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.099031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.099052 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.099068 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:37Z","lastTransitionTime":"2026-02-02T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.202356 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.202395 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.202405 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.202423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.202435 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:37Z","lastTransitionTime":"2026-02-02T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.305215 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.305284 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.305299 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.305325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.305340 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:37Z","lastTransitionTime":"2026-02-02T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.408563 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.408605 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.408614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.408629 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.408640 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:37Z","lastTransitionTime":"2026-02-02T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.512194 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.512245 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.512257 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.512277 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.512288 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:37Z","lastTransitionTime":"2026-02-02T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.614469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.614512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.614520 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.614537 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.614556 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:37Z","lastTransitionTime":"2026-02-02T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.716491 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.716522 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.716530 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.716544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.716552 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:37Z","lastTransitionTime":"2026-02-02T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.793198 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:38:21.279790788 +0000 UTC Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.819078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.819114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.819122 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.819137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.819148 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:37Z","lastTransitionTime":"2026-02-02T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.820196 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:37 crc kubenswrapper[4782]: E0202 10:39:37.820291 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.921309 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.921350 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.921358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.921372 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.921380 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:37Z","lastTransitionTime":"2026-02-02T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.015346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.015406 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.015417 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.015440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.015455 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.029591 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.033371 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.033400 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.033413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.033451 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.033462 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.044382 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.047853 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.047880 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.047894 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.047907 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.047916 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.060455 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.064073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.064103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.064111 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.064125 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.064133 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.077800 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.085797 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.085946 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
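The patch failures above share one proximate cause, stated at the end of each entry: the kubelet's node-status PATCH is rejected because the apiserver cannot call the node.network-node-identity.openshift.io validating webhook — the endpoint at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. This is the classic signature of a CRC/OpenShift Local VM resumed long after its internal certificates lapsed. A quick way to confirm from a shell on the node (a diagnostic sketch; the endpoint and dates come from the log itself):

$ openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -dates
# notAfter should read Aug 24 17:21:41 2025 GMT, matching the error above

If the dates confirm expiry, the usual remedy is to leave the instance running (or recreate it) so the cluster's certificate-rotation controllers can re-issue the bundle; CRC generally renews expired certificates on start, though it can take several minutes.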
event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.085959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.085974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.085986 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.099314 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.099461 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.101001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.101022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.101030 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.101043 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.101052 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.203755 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.203800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.203810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.203826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.203838 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.306248 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.306277 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.306285 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.306297 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.306306 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.408631 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.408722 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.408736 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.408757 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.408769 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.422245 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.422507 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.422688 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:54.422628627 +0000 UTC m=+74.306821533 (durationBeforeRetry 16s). 
Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.422245 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.422507 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.422688 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:54.422628627 +0000 UTC m=+74.306821533 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.511935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.512013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.512029 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.512053 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.512073 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.615765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.615828 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.615838 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.615861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.615875 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.718693 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.718741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.718751 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.718765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.718779 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.793948 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:24:00.802416398 +0000 UTC Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.820763 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.820820 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.820904 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.820976 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.821033 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.821122 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.821248 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.821267 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.821274 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.821284 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.821293 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.924615 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.924678 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.924688 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.924704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.924717 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.027603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.027640 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.027666 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.027681 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.027690 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.130751 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.130802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.130818 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.130838 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.130853 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.233007 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.233035 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.233048 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.233061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.233071 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.336455 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.336492 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.336502 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.336514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.336524 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.438742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.438785 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.438794 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.438809 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.438822 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.542144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.542220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.542231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.542248 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.542259 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.644998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.645075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.645093 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.645122 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.645137 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.748304 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.748347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.748357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.748372 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.748383 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.794130 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:41:35.690558828 +0000 UTC Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.820673 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:39 crc kubenswrapper[4782]: E0202 10:39:39.820794 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.850806 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.850852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.850873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.850888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.850908 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.953955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.954225 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.954241 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.954260 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.954276 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.057355 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.057410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.057421 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.057438 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.057448 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.160548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.160578 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.160611 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.160625 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.160634 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.263379 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.263419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.263431 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.263446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.263456 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.366796 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.366836 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.366847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.366866 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.366878 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.470340 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.470398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.470414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.470443 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.470471 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.574746 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.574838 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.574860 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.574891 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.574927 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.679563 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.679617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.679629 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.679672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.679689 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.782607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.782677 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.782689 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.782707 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.782718 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.795279 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:30:09.217440259 +0000 UTC
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.820153 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.820182 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.820199 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
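
Two clues here explain the patch failures that follow. The kubelet-serving certificate's rotation deadlines logged at 10:39:39 and 10:39:40 (2025-12-18 and 2025-12-02) are already months in the past relative to the node clock, and every status patch below is rejected because the network-node-identity webhook's serving certificate expired on 2025-08-24. A sketch that extracts that skew straight out of the x509 errors (same illustrative dump path as above):

    #!/usr/bin/env python3
    # Sketch: compute how far past the webhook certificate's notAfter the
    # node clock is, from the first x509 error found in a saved dump.
    import re
    from datetime import datetime

    X509 = re.compile(r'current time (\S+) is after ([0-9TZ:-]+)')

    def expiry_skew(path="kubelet.log"):
        for line in open(path, encoding="utf-8", errors="replace"):
            m = X509.search(line)
            if m:
                now = datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%SZ")
                not_after = datetime.strptime(m.group(2), "%Y-%m-%dT%H:%M:%SZ")
                print(f"clock is {now - not_after} past certificate expiry ({m.group(2)})")
                return

    if __name__ == "__main__":
        expiry_skew()

For this capture it would report a gap of a bit over 161 days, which fits a cluster image resumed long after its certificates were issued.
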
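Each status_manager.go:875 failure below carries the entire JSON patch the kubelet tried to apply, quoted once by the error string and once more by the log encoder, which is why the payloads read as walls of \" escapes. A sketch for peeling one record back into readable JSON (the layer-peeling loop is an assumption about the quoting depth in these records, not a general klog unquoter):

    #!/usr/bin/env python3
    # Sketch: recover the doubly quoted JSON status patch embedded in a
    # single status_manager.go:875 journal line (fed on stdin) and
    # pretty-print it.
    import json
    import re
    import sys

    def embedded_patch(record):
        m = re.search(r'failed to patch status \\"(.*?)\\" for pod', record)
        if not m:
            raise ValueError("no status patch in this record")
        value = json.loads('"' + m.group(1) + '"')  # undo the journal's own quoting
        for _ in range(3):                          # peel the remaining layers
            if not isinstance(value, str):
                break
            try:
                value = json.loads(value)
            except json.JSONDecodeError:
                value = json.loads('"' + value + '"')
        return value

    if __name__ == "__main__":
        print(json.dumps(embedded_patch(sys.stdin.read()), indent=2))

Fed the etcd-crc record below, it should yield the ordinary conditions/containerStatuses patch that the expired webhook kept bouncing.
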
Feb 02 10:39:40 crc kubenswrapper[4782]: E0202 10:39:40.820280 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:39:40 crc kubenswrapper[4782]: E0202 10:39:40.820446 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:39:40 crc kubenswrapper[4782]: E0202 10:39:40.820575 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.845802 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d8
1a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.864295 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.878449 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.888262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.888325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.888336 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.888353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.888367 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.896516 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.909060 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.925431 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.943051 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.959707 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.987588 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d
8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.990390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.990436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.990447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.990465 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.990476 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.002886 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.019638 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.034017 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.049902 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.066456 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.082351 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.093476 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.093528 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.093541 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.093558 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 
10:39:41.093569 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.098567 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.117904 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eea
de0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39
:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.129212 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.196896 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.196952 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.196962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.196978 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.196987 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.299026 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.299096 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.299109 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.299128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.299168 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.401315 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.401377 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.401388 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.401403 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.401414 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.503920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.504001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.504034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.504054 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.504067 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.606895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.606944 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.606956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.606973 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.606993 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.710486 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.710524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.710534 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.710549 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.710559 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.796219 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:52:19.632419508 +0000 UTC Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.813012 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.813047 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.813056 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.813069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.813077 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.820929 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:41 crc kubenswrapper[4782]: E0202 10:39:41.821064 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.916316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.916374 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.916391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.916411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.916426 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.006532 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.019527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.019575 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.019587 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.019606 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.019620 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.025950 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.046793 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.063105 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.087669 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.103698 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.120565 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.122247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.122411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.122871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.123268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc 
kubenswrapper[4782]: I0202 10:39:42.123676 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.134724 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.159899 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.181918 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.198530 4782 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.211916 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.224861 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.226914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.226961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.226971 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.226984 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.226993 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.236480 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.253197 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 
2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.265819 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.278058 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc 
kubenswrapper[4782]: I0202 10:39:42.294350 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.311420 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.329411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.329460 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.329469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc 
kubenswrapper[4782]: I0202 10:39:42.329498 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.329515 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.432224 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.432271 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.432283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.432303 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.432317 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.534911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.534947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.534956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.534970 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.534978 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.638165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.638220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.638233 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.638249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.638258 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.740991 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.741024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.741033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.741045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.741054 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.797287 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:13:38.17618963 +0000 UTC Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.820234 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.820265 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:42 crc kubenswrapper[4782]: E0202 10:39:42.820375 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.820425 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:42 crc kubenswrapper[4782]: E0202 10:39:42.820500 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:42 crc kubenswrapper[4782]: E0202 10:39:42.820579 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.821348 4782 scope.go:117] "RemoveContainer" containerID="648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.845987 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.846040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.846053 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.846072 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.846084 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.949231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.949503 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.949512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.949527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.949537 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.051445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.051500 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.051514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.051530 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.051542 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.153969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.154012 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.154021 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.154036 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.154047 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.191768 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/1.log" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.195507 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.196287 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.209691 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.220810 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.230334 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.256679 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.256886 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.256923 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.256934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.256950 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.256961 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.272732 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.287452 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.298173 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.321930 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCon
tainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.339220 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.355420 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.359405 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.359428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.359438 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.359454 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.359465 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.369810 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.382278 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.393390 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.406439 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.415733 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.424935 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.436554 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.445994 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.461852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.461895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.461908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.461927 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 
10:39:43.461939 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.564116 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.564163 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.564175 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.564190 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.564199 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.666533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.666601 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.666616 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.666634 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.666661 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.769073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.769108 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.769115 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.769129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.769137 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.797878 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:42:13.845368485 +0000 UTC Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.820112 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:43 crc kubenswrapper[4782]: E0202 10:39:43.820248 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.871495 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.871532 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.871543 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.871569 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.871585 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.974977 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.975016 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.975025 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.975039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.975050 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.077460 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.077512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.077528 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.077541 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.077551 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.179711 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.179982 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.179990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.180003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.180012 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.199532 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/2.log" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.199966 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/1.log" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.202520 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a" exitCode=1 Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.202556 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a"} Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.202594 4782 scope.go:117] "RemoveContainer" containerID="648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.203432 4782 scope.go:117] "RemoveContainer" containerID="c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a" Feb 02 10:39:44 crc kubenswrapper[4782]: E0202 10:39:44.204079 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.220312 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.237160 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a
114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 
10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.249733 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.264690 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.276122 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.281396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.281442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.281453 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.281467 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.281478 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.287920 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.299429 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.311614 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.322353 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.332914 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is 
after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.344876 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.357890 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.372298 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.382484 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.383690 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.383721 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.383732 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.383756 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.383769 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.394368 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.411866 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.427197 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.438071 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.485947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.485975 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.485983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.486011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.486020 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.588831 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.588875 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.588889 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.588906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.588919 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.691337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.691374 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.691383 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.691398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.691407 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.794060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.794095 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.794106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.794119 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.794129 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.798373 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:49:25.390452457 +0000 UTC
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.820679 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:39:44 crc kubenswrapper[4782]: E0202 10:39:44.820812 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.820878 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.820679 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:39:44 crc kubenswrapper[4782]: E0202 10:39:44.821004 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:39:44 crc kubenswrapper[4782]: E0202 10:39:44.821060 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.895980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.896019 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.896032 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.896154 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.896172 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.998070 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.998112 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.998121 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.998135 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.998148 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.100333 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.100703 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.100789 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.100885 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.100980 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.203933 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.203968 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.203989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.204016 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.204029 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.207378 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/2.log" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.210838 4782 scope.go:117] "RemoveContainer" containerID="c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a" Feb 02 10:39:45 crc kubenswrapper[4782]: E0202 10:39:45.211036 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.225481 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.239690 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.255280 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.278829 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a
114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.291294 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.304541 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.306058 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.306087 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.306102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.306115 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.306125 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.324794 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.347934 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.362373 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.375057 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.390870 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5
703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\
"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.403862 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.408566 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.408631 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.408659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.408677 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.408696 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.417618 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.438626 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.455690 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.470079 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.483498 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.496133 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.511172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.511207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.511219 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.511235 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.511246 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.613788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.613872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.613884 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.613904 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.613916 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.717481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.717740 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.717832 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.717898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.717971 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.799387 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:00:14.504754402 +0000 UTC Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.820153 4782 util.go:30] "No sandbox for pod can be found. 
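The repeated x509 failures above all trace to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serving a certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. A minimal sketch for confirming the served certificate's validity window from the node, assuming Python with the third-party cryptography package; the host and port come from the log, everything else is illustrative:

import datetime
import socket
import ssl

from cryptography import x509  # third-party package; assumed available

def cert_validity(host: str, port: int):
    # Fetch the peer certificate without verifying it, since verification
    # is exactly what fails once the certificate has expired.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    # The *_utc properties need cryptography >= 42; older releases expose
    # naive not_valid_before / not_valid_after instead.
    return cert.not_valid_before_utc, cert.not_valid_after_utc

if __name__ == "__main__":
    not_before, not_after = cert_validity("127.0.0.1", 9743)
    now = datetime.datetime.now(datetime.timezone.utc)
    print(f"notBefore={not_before} notAfter={not_after} now={now}")
    print("expired" if now > not_after else "valid")

On this capture the output should line up with the log: a notAfter of 2025-08-24T17:21:41Z, well before the 2026-02-02 node clock.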
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.820354 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.820392 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.820404 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.820419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.820430 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: E0202 10:39:45.820816 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.922082 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.922131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.922142 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.922155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.922165 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.024512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.024789 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.024911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.025003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.025124 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.128040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.128361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.128475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.128576 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.128685 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.231894 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.232265 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.232385 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.232475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.232554 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.335232 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.335310 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.335324 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.335347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.335363 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.438300 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.438657 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.438761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.438867 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.439283 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.541586 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.541609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.541618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.541631 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.541643 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.644597 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.644661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.644673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.644691 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.644703 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.747110 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.747153 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.747165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.747181 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.747193 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.799569 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:01:15.408211465 +0000 UTC Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.820946 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:46 crc kubenswrapper[4782]: E0202 10:39:46.821079 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.821150 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:46 crc kubenswrapper[4782]: E0202 10:39:46.821270 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.821531 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:46 crc kubenswrapper[4782]: E0202 10:39:46.821819 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.832830 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.849481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.849793 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.849917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.850069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.850189 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.952687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.952734 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.952747 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.952769 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.952784 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.055353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.055675 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.055776 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.055862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.055964 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.158784 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.159137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.159254 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.159366 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.159474 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.261306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.261340 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.261349 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.261364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.261375 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.363699 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.364058 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.364138 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.364229 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.364333 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.466101 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.466606 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.466737 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.466813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.466870 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.569775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.569833 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.569847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.569865 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.569882 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.672674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.672772 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.672783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.672802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.672813 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.775722 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.775981 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.776060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.776129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.776207 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.800218 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:29:50.75133862 +0000 UTC Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.820771 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:47 crc kubenswrapper[4782]: E0202 10:39:47.820903 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.878196 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.878436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.878500 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.878569 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.878630 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.981533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.981576 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.981588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.981603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.981614 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.084041 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.084101 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.084112 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.084129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.084141 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.187013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.187306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.187400 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.187480 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.187544 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.292538 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.292603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.292616 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.292631 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.292671 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.395013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.395043 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.395051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.395065 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.395075 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.429661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.429693 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.429702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.429716 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.429725 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.443919 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.447808 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.447843 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.447854 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.447868 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.447880 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.460908 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.465784 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.465916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.466001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.466082 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.466145 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.480029 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.483689 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.483730 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.483743 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.483760 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.483772 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.496983 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.500662 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.500801 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.500879 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.500953 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.501019 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.513507 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.513628 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.514981 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.515024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.515036 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.515055 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.515067 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.617221 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.617262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.617270 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.617286 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.617295 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.718861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.719103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.719175 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.719255 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.719324 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.801133 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:22:54.873496645 +0000 UTC Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.820184 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.820255 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.820441 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.820764 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.820654 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.821041 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.821404 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.821425 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.821433 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.821444 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.821452 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.923937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.923997 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.924042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.924060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.924070 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.026238 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.026278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.026289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.026307 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.026319 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.128959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.128994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.129003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.129016 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.129025 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.231391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.231430 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.231440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.231454 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.231465 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.335046 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.335094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.335103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.335118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.335131 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.437412 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.437481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.437495 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.437514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.437531 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.540478 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.540519 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.540529 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.540547 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.540563 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.644621 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.644674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.644685 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.644704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.644714 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.747862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.747931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.747947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.747981 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.747997 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.801790 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 08:23:48.074524929 +0000 UTC Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.820195 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:49 crc kubenswrapper[4782]: E0202 10:39:49.820313 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.850957 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.851010 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.851022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.851038 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.851050 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.953014 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.953042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.953049 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.953062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.953070 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.055206 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.055231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.055239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.055250 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.055261 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.157746 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.157802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.157828 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.157845 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.157860 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.259922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.259965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.259974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.259988 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.259997 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.362337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.362698 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.362776 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.362847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.362917 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.464744 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.464778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.464788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.464800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.464809 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.566829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.566863 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.566872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.566884 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.566895 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.669590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.669669 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.669686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.669703 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.669714 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.771472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.771728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.771810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.771890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.771955 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.802233 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:19:45.252441173 +0000 UTC Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.820611 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.820654 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.820738 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:50 crc kubenswrapper[4782]: E0202 10:39:50.821123 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:50 crc kubenswrapper[4782]: E0202 10:39:50.821012 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:50 crc kubenswrapper[4782]: E0202 10:39:50.821254 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.838555 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.850167 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.863517 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.875424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.875461 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.875471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.875485 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.875497 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.881435 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.894256 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.907915 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.938037 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.976749 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.976782 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.976792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.976806 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.976815 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.997095 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.010002 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.020572 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.031785 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.043700 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.060349 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.073109 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.078952 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.078989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.079002 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.079019 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.079029 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.147061 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.169089 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.181029 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.181073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.181083 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.181098 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.181108 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.182484 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.193958 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.211432 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.283962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.283998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.284009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.284024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.284034 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.386961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.387027 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.387045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.387074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.387095 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.490393 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.490458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.490470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.490491 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.490501 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.592936 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.592994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.593004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.593018 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.593027 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.696299 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.696360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.696378 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.696397 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.696411 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.799068 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.799102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.799111 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.799126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.799141 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.803190 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:05:49.677161228 +0000 UTC Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.820418 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:51 crc kubenswrapper[4782]: E0202 10:39:51.820533 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.901893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.901989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.902006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.902024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.902041 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.005572 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.005607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.005621 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.005649 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.005660 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.107954 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.108002 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.108013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.108028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.108039 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.210594 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.210636 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.210660 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.210674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.210684 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.313620 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.313662 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.313673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.313687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.313697 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.416598 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.416650 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.416659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.416672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.416681 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.519288 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.519330 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.519341 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.519357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.519368 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.621447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.621476 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.621484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.621496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.621505 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.723993 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.724034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.724046 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.724062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.724073 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.804073 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:05:59.754969304 +0000 UTC Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.820904 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.820924 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:52 crc kubenswrapper[4782]: E0202 10:39:52.821106 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.821198 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:52 crc kubenswrapper[4782]: E0202 10:39:52.821300 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:52 crc kubenswrapper[4782]: E0202 10:39:52.821393 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.830201 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.830251 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.830267 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.830284 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.830294 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.936258 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.936292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.936301 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.936314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.936322 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.038720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.038774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.038791 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.038812 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.038830 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.141401 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.141442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.141452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.141470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.141482 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.244155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.244199 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.244211 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.244227 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.244242 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.346720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.346752 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.346761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.346773 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.346782 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.448965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.449004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.449013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.449027 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.449054 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.551203 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.551231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.551247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.551260 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.551269 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.653579 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.653609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.653617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.653629 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.653652 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.756313 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.756346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.756358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.756371 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.756382 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.804861 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 02:30:34.6657645 +0000 UTC Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.820243 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:53 crc kubenswrapper[4782]: E0202 10:39:53.820371 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.858695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.858734 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.858746 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.858762 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.858772 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.961134 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.961174 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.961185 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.961201 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.961212 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.064907 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.064948 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.064960 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.064979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.064991 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.167093 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.167127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.167138 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.167152 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.167164 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.270538 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.270581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.270591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.270607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.270621 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.372872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.372926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.372935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.372954 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.372963 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.476204 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.476256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.476267 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.476286 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.476299 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.491851 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:54 crc kubenswrapper[4782]: E0202 10:39:54.492048 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:54 crc kubenswrapper[4782]: E0202 10:39:54.492154 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:40:26.492130179 +0000 UTC m=+106.376322895 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.580236 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.580283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.580294 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.580316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.580328 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.683451 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.683481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.683490 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.683505 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.683514 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.785677 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.785952 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.785966 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.785982 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.785996 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.805457 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:32:11.050291067 +0000 UTC
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.820801 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.820827 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.820814 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:39:54 crc kubenswrapper[4782]: E0202 10:39:54.820924 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:39:54 crc kubenswrapper[4782]: E0202 10:39:54.821004 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:39:54 crc kubenswrapper[4782]: E0202 10:39:54.821071 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.891684 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.891721 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.891729 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.891742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.891752 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.994420 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.994448 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.994458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.994469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.994478 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.097814 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.097869 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.097884 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.097903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.097917 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.201723 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.201799 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.201814 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.201840 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.201854 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.305298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.305367 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.305381 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.305402 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.305421 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.409072 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.409130 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.409144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.409165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.409179 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.512687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.513193 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.513285 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.513414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.513513 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.616882 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.617379 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.617484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.617593 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.617732 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.721531 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.721597 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.721608 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.721626 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.721654 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.805581 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:26:23.367748959 +0000 UTC
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.821140 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:39:55 crc kubenswrapper[4782]: E0202 10:39:55.821321 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.824831 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.824860 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.824874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.824894 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.824909 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.928009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.928042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.928049 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.928063 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.928072 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.031071 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.031117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.031126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.031143 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.031154 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.134270 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.134316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.134330 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.134347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.134359 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.237220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.237268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.237277 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.237292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.237305 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.339802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.339868 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.339882 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.339900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.339914 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.443103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.443152 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.443166 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.443189 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.443203 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.546312 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.546363 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.546374 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.546396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.546409 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.650506 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.650953 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.651066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.651160 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.651265 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.754619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.755578 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.755706 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.755808 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.755915 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.806175 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:06:02.522544272 +0000 UTC
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.820775 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.820827 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.820858 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:39:56 crc kubenswrapper[4782]: E0202 10:39:56.820931 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:39:56 crc kubenswrapper[4782]: E0202 10:39:56.821031 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:39:56 crc kubenswrapper[4782]: E0202 10:39:56.821109 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.859004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.859340 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.859496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.859588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.859773 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.966484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.966521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.966530 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.966546 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.966555 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.068900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.068935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.068945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.068958 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.068968 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.171699 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.171739 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.171761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.171777 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.171790 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.273831 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.273867 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.273877 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.273893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.273905 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.378596 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.378688 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.378701 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.378719 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.378733 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.482468 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.482511 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.482524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.482542 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.482556 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.586522 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.586569 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.586581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.586599 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.586611 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.689968 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.690028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.690046 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.690071 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.690090 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.792127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.792195 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.792209 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.792232 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.792246 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.806555 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:25:16.886851728 +0000 UTC
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.820908 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:39:57 crc kubenswrapper[4782]: E0202 10:39:57.821084 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.895173 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.895219 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.895233 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.895248 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.895260 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.999010 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.999065 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.999075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.999091 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.999102 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.102218 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.102320 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.102345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.102375 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.102397 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.205695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.205749 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.205760 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.205783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.205797 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.309206 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.309259 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.309272 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.309289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.309302 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.412475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.412523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.412540 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.412555 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.412567 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.515533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.515564 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.515572 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.515585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.515595 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.579417 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.579448 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.579458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.579471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.579479 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.592054 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.595045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.595074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.595082 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.595094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.595104 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.606850 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.610149 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.610195 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
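Every retry above fails on the same x509 error from the node.network-node-identity.openshift.io webhook: its serving certificate expired on 2025-08-24T17:21:41Z, long before the node's current clock of 2026-02-02. A minimal Go sketch for confirming that from the node itself (assumptions: shell access and a Go toolchain on the CRC host; the address 127.0.0.1:9743 is taken from the failing Post in the log; verification is skipped on purpose so an already-expired certificate can still be read):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Webhook endpoint copied from the failing Post in the log above
	// (assumption: run on the CRC node itself, since the address is loopback).
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspection only: lets us read an expired cert
	})
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}

For a certificate matching the error above, the output would show notAfter=2025-08-24T17:21:41Z and expired=true.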
event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.610209 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.610223 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.610232 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.622567 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.626383 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.626428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
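Separately from the webhook failure, the Ready condition in every one of these patches carries the same cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A small Go sketch that reproduces that check (the directory is taken from the kubelet message; the extension list is an assumption based on the usual libcni loading rules, not something the log states):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet message
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("cannot read %s: %v", dir, err)
	}
	found := 0
	for _, e := range entries {
		// Assumed extensions, following common libcni loader behavior.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files found; network plugin not ready")
	}
}

An empty result is consistent with the NetworkReady=false / NetworkPluginNotReady condition the kubelet keeps reporting here.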
event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.626437 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.626454 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.626481 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.640167 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.644028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.644056 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
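For a sense of scale, a sketch that parses the two timestamps quoted verbatim in the error above and prints how long the certificate had already been expired when these retries ran:

package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Both timestamps are copied verbatim from the webhook error in the log.
	now, err := time.Parse(time.RFC3339, "2026-02-02T10:39:58Z")
	if err != nil {
		log.Fatal(err)
	}
	notAfter, err := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")
	if err != nil {
		log.Fatal(err)
	}
	// Prints 3881h18m0s, i.e. the certificate had been expired for roughly
	// 162 days by the time the kubelet made these calls.
	fmt.Printf("expired for %v\n", now.Sub(notAfter).Round(time.Minute))
}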
event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.644064 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.644078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.644087 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.659371 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.659592 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.661764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.661813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.661826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.661845 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.661858 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.764813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.764919 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.764937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.764956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.764971 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.807178 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:52:40.861644665 +0000 UTC Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.820606 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.820901 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.821017 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.821147 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.821249 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.821300 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.868439 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.868520 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.868533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.868557 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.868576 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.972013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.972085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.972097 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.972123 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.972138 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.074990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.075033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.075042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.075061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.075072 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.178825 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.178893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.178906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.178930 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.178948 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.282262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.282307 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.282325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.282344 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.282356 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.386051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.386115 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.386127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.386148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.386161 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.491132 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.491183 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.491193 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.491213 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.491224 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.594211 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.594675 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.594798 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.594916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.595004 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.698423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.698848 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.698919 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.699070 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.699171 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.802527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.802809 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.802925 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.803017 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.803103 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.807869 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:00:46.031524147 +0000 UTC
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.820162 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:39:59 crc kubenswrapper[4782]: E0202 10:39:59.820330 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.906381 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.906436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.906453 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.906475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.906489 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.010155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.010228 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.010245 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.010265 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.010280 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.113051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.113117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.113132 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.113152 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.113418 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.216604 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.217049 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.217129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.217216 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.217296 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.320736 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.321121 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.321293 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.321381 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.321455 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.424607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.424708 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.424719 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.424739 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.424752 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.528683 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.528730 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.528776 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.528802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.528818 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.632463 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.632548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.632564 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.632592 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.632611 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.736187 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.736239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.736249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.736269 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.736283 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.809347 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:25:54.541697627 +0000 UTC
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.821015 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:40:00 crc kubenswrapper[4782]: E0202 10:40:00.821223 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.821525 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.822028 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:40:00 crc kubenswrapper[4782]: E0202 10:40:00.822031 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:40:00 crc kubenswrapper[4782]: E0202 10:40:00.822196 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.823092 4782 scope.go:117] "RemoveContainer" containerID="c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a"
Feb 02 10:40:00 crc kubenswrapper[4782]: E0202 10:40:00.823492 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.839711 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.840461 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.840610 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.840734 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.840835 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.840912 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.858154 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.874309 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.887455 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.904977 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.919168 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.932268 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.943672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.943925 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.944011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.944113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.944265 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.948849 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.963333 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.984065 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.999149 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.017155 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.033430 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.047186 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.047244 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.047258 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.047295 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.047339 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.054023 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.070872 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.088817 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.105266 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.123519 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.146603 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a
114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.150114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.150148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.150158 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.150172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.150182 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.252588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.252663 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.252672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.252685 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.252693 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.260527 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/0.log" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.260579 4782 generic.go:334] "Generic (PLEG): container finished" podID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" containerID="9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937" exitCode=1 Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.260613 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerDied","Data":"9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.261043 4782 scope.go:117] "RemoveContainer" containerID="9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.283231 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a
114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.295555 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69
f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.308910 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4
f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.321573 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.334463 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.345666 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.355616 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.355683 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.355700 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.355719 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.355729 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.357888 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.370707 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.381156 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 
10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.389449 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.399513 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.411713 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.423911 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.440308 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.451440 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.458086 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.458116 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.458125 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.458140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.458150 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.474664 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.489748 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.505018 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.517003 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.560973 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.561026 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.561040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.561060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.561073 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.663996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.664045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.664054 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.664069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.664080 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.766787 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.766827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.766837 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.766851 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.766861 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.810431 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:45:12.209195467 +0000 UTC Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.820773 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:01 crc kubenswrapper[4782]: E0202 10:40:01.820902 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.872057 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.872108 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.872118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.872137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.872153 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.975278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.975332 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.975342 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.975354 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.975364 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.077514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.077561 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.077570 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.077584 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.077595 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.180209 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.180257 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.180269 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.180286 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.180298 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.265461 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/0.log" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.265535 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerStarted","Data":"b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.278091 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.282162 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.282225 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.282238 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.282256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.282268 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.290484 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.302943 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.316967 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.330722 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.341245 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.354621 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.365930 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 
10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.378946 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.384065 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.384247 4782 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.384316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.384389 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.384504 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.404593 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.418612 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.431726 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.441760 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.452782 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.464347 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.480161 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.487275 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.487482 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.487548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.487658 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.487722 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.494601 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.507333 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.529045 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.591346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.591416 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.591440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.591457 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.591468 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.694124 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.694159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.694171 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.694188 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.694200 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.796961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.796990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.797000 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.797013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.797023 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.810922 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:29:14.163688219 +0000 UTC Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.820629 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:02 crc kubenswrapper[4782]: E0202 10:40:02.820856 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.820973 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:02 crc kubenswrapper[4782]: E0202 10:40:02.821363 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.821567 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:02 crc kubenswrapper[4782]: E0202 10:40:02.821944 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.899165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.899203 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.899214 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.899231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.899245 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.001375 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.001411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.001423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.001439 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.001450 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.103700 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.103745 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.103756 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.103774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.103788 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.205463 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.205496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.205504 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.205518 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.205527 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.309218 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.309504 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.309595 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.309715 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.309807 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.432039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.432127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.432142 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.432165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.432181 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.535144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.535253 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.535278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.535320 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.535346 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.639307 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.639401 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.639446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.639472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.639488 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.744011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.744077 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.744094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.744117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.744134 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.811945 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:31:54.871479391 +0000 UTC Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.820316 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:03 crc kubenswrapper[4782]: E0202 10:40:03.820547 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.847770 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.848149 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.848250 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.848344 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.848416 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.951594 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.951651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.951661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.951675 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.951683 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.054296 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.054329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.054338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.054357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.054371 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.157756 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.157818 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.157829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.157844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.157854 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.261118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.261175 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.261188 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.261211 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.261226 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.363774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.364103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.364172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.364292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.364357 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.466872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.467500 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.467598 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.467735 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.467837 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.570343 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.570390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.570402 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.570420 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.570432 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.673117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.673358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.673569 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.673778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.673859 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.775874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.775923 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.775935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.775956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.775967 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.812422 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:30:49.408254869 +0000 UTC Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.817897 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.818015 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.818048 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.818077 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.818099 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818215 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818259 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:08.818246724 +0000 UTC m=+148.702439440 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818408 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818513 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818548 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818560 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818433 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818595 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818602 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818474 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:08.81846804 +0000 UTC m=+148.702660756 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818630 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:08.818620825 +0000 UTC m=+148.702813541 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818670 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:08.818635695 +0000 UTC m=+148.702828411 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818684 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:08.818678636 +0000 UTC m=+148.702871352 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.821012 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.821091 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.821014 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.821336 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.821209 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.821571 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.878789 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.879043 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.879133 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.879225 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.879319 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.982452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.982496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.982508 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.982526 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.982538 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.085067 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.085102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.085162 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.085181 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.085193 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.187870 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.187928 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.187958 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.187979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.187996 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.290090 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.290128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.290139 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.290157 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.290168 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.393131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.393187 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.393200 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.393218 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.393258 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.495410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.495466 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.495477 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.495493 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.495505 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.597774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.597822 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.597834 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.597854 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.597873 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.700631 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.700724 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.700747 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.700777 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.700799 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.803056 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.803099 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.803109 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.803124 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.803134 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.813488 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:41:09.777358582 +0000 UTC Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.820966 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:05 crc kubenswrapper[4782]: E0202 10:40:05.821158 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.906436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.906481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.906493 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.906509 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.906520 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.008842 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.008903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.008917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.008940 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.008954 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.113341 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.113414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.113430 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.113452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.113476 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.216285 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.216590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.216733 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.216844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.216955 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.319852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.320311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.320415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.320517 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.320831 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.424754 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.425280 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.425406 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.425533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.425629 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.529203 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.529704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.529845 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.529945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.530018 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.633445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.633788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.633917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.634020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.634126 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.737279 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.737315 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.737324 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.737339 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.737348 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.814174 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 12:47:22.92528083 +0000 UTC Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.820568 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:06 crc kubenswrapper[4782]: E0202 10:40:06.820772 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.820940 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.820567 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:06 crc kubenswrapper[4782]: E0202 10:40:06.821303 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:06 crc kubenswrapper[4782]: E0202 10:40:06.821321 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.840879 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.840935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.840949 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.841015 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.841031 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.944014 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.944070 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.944088 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.944113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.944156 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.046280 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.046672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.046781 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.046873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.046942 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.149910 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.149954 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.149965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.149984 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.149997 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.253080 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.253116 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.253128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.253146 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.253159 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.355709 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.355748 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.355761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.355778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.355791 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.458740 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.458768 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.458777 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.458791 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.458800 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.562383 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.562416 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.562425 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.562438 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.562448 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.664852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.664887 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.664907 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.664926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.664946 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.766614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.766723 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.766742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.766766 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.766777 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.814968 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 06:05:01.502008235 +0000 UTC Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.820258 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:07 crc kubenswrapper[4782]: E0202 10:40:07.820395 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.869345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.869412 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.869424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.869444 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.869455 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.972034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.972075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.972085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.972131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.972143 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.074471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.074501 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.074510 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.074523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.074534 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.176828 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.176904 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.176918 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.176937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.176948 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.280668 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.281139 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.281215 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.281891 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.281983 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.385519 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.385573 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.385588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.385609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.385621 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.489113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.489176 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.489239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.489265 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.489279 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.593094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.593159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.593173 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.593191 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.593206 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.696927 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.696988 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.697003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.697033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.697047 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.786066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.786134 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.786144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.786162 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.786175 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.802372 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:08Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.806542 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.806585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.806618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.806668 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.806680 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.815525 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:26:08.861555919 +0000 UTC Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.820791 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.820992 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.821062 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.821249 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.821303 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.821566 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.821944 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:08Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.827788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.828018 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.828118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.828217 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.828296 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.874236 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:08Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.879493 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.879544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.879552 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.879571 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.879580 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.895274 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:08Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.902435 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.902518 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.902537 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.902568 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.902581 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.917549 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:08Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.917738 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.919469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.919498 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.919510 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.919529 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.919541 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.021904 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.021948 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.021959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.021977 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.021987 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.124778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.124864 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.124906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.124928 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.124939 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.228085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.228535 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.228622 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.228750 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.228843 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.331556 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.331605 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.331616 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.331662 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.331676 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.434268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.434322 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.434334 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.434352 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.434364 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.536716 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.536998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.537022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.537040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.537049 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.639477 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.639523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.639531 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.639545 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.639556 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.743033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.743091 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.743155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.743173 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.743183 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.816962 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 00:32:25.974111583 +0000 UTC Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.820360 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:09 crc kubenswrapper[4782]: E0202 10:40:09.820763 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.846720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.847074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.847196 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.847350 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.847597 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.951651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.951697 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.951709 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.951731 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.951747 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.054947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.054986 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.054996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.055011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.055023 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.157867 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.158316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.158401 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.158472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.158543 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.261702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.261763 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.261783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.261808 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.261826 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.364720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.365062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.365303 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.365475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.365621 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.468148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.468454 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.468591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.468692 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.468852 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.571908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.571961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.571975 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.571997 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.572015 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.674805 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.674911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.674921 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.674934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.674943 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.778448 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.778489 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.778501 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.778518 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.778530 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.818327 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 09:57:51.324902372 +0000 UTC Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.820739 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:10 crc kubenswrapper[4782]: E0202 10:40:10.820893 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.820964 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:10 crc kubenswrapper[4782]: E0202 10:40:10.821044 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.822533 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:10 crc kubenswrapper[4782]: E0202 10:40:10.823412 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.840664 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.854722 
4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.868410 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.881594 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.881626 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.881635 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.881678 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.881689 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.885737 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:
38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.896404 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.910759 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.923678 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.934626 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.955897 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.970718 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.983514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.983553 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.983564 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.983581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.983781 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.985938 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.998155 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc 
kubenswrapper[4782]: I0202 10:40:11.020756 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91
abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.031896 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69
f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.048506 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4
f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.065828 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.079789 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.087204 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.087278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.087291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.087330 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.087343 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:11Z","lastTransitionTime":"2026-02-02T10:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.099159 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.115595 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.190281 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.190326 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.190337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.190356 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.190368 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:11Z","lastTransitionTime":"2026-02-02T10:40:11Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.292619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.292676 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.292689 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.292708 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.292720 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:11Z","lastTransitionTime":"2026-02-02T10:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.394850 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.394904 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.394914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.394937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.394949 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:11Z","lastTransitionTime":"2026-02-02T10:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.497281 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.497330 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.497342 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.497361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.497375 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:11Z","lastTransitionTime":"2026-02-02T10:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.600028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.600069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.600081 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.600097 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.600110 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:11Z","lastTransitionTime":"2026-02-02T10:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.703693 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.703749 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.703759 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.703775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.703786 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:11Z","lastTransitionTime":"2026-02-02T10:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.806215 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.806249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.806258 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.806327 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.806338 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:11Z","lastTransitionTime":"2026-02-02T10:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.819560 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:59:26.955982449 +0000 UTC Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.820664 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:11 crc kubenswrapper[4782]: E0202 10:40:11.821112 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.821534 4782 scope.go:117] "RemoveContainer" containerID="c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.909826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.910437 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.910451 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.910471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.910485 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:11Z","lastTransitionTime":"2026-02-02T10:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.015226 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.015280 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.015291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.015317 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.015336 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.120120 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.120181 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.120198 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.120222 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.120239 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.223871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.223937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.223949 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.223972 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.223990 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.300516 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/2.log" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.304245 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.304800 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.325672 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.326513 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.326548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.326559 4782 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.326573 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.326582 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.351760 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.365950 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.392371 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.420169 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.429296 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.429347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.429361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.429379 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.429392 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.442187 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.460012 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.476394 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.498382 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c8
69cb328aa09bc10e419af952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.511108 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.525939 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.531717 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.531754 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.531764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.531777 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.531787 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.542987 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.557549 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.571328 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d8
8c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.584077 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.601194 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.614613 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 
10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.630032 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.637614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.637672 4782 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.637690 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.637734 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.637749 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.645427 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.740577 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.740621 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.740632 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.740681 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.740698 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.823635 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:12 crc kubenswrapper[4782]: E0202 10:40:12.823933 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.824193 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.824172 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:34:38.557072502 +0000 UTC Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.824224 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:12 crc kubenswrapper[4782]: E0202 10:40:12.824487 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:12 crc kubenswrapper[4782]: E0202 10:40:12.824705 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.843572 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.843618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.843630 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.843688 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.843708 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.947364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.947419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.947432 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.947451 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.947463 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.049732 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.049763 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.049772 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.049783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.049794 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.152477 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.152520 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.152533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.152552 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.152568 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.254998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.255040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.255050 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.255066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.255076 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.311026 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/3.log" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.312441 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/2.log" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.317012 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" exitCode=1 Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.317055 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.317121 4782 scope.go:117] "RemoveContainer" containerID="c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.318245 4782 scope.go:117] "RemoveContainer" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" Feb 02 10:40:13 crc kubenswrapper[4782]: E0202 10:40:13.318434 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.338295 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.353185 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.357810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.357837 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.357848 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.357868 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.357880 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.369195 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.383771 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.398175 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.412160 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.430404 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.444003 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 
10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.456860 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.464727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.464777 4782 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.464807 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.464826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.464838 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.479434 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.496587 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.514790 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.530173 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.543678 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.557589 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.567797 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.567844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.567857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.567878 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.567891 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.574949 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.588916 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.613030 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.636742 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c8
69cb328aa09bc10e419af952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:13Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0202 10:40:12.792477 6839 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.792747 6839 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793003 6839 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793147 6839 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793807 6839 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793810 6839 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:40:12.793956 6839 factory.go:656] Stopping watch factory\\\\nI0202 10:40:12.793991 6839 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:40:12.815740 6839 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0202 10:40:12.815893 6839 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0202 10:40:12.815971 6839 ovnkube.go:599] Stopped 
ovnkube\\\\nI0202 10:40:12.816011 6839 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:40:12.816108 6839 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:40:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.671229 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.671302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.671314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.671338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.671357 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.774332 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.774385 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.774398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.774427 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.774441 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.820496 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:13 crc kubenswrapper[4782]: E0202 10:40:13.820683 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.824433 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:23:51.531997056 +0000 UTC Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.877253 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.877401 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.877419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.877445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.877465 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.981102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.981150 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.981163 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.981184 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.981198 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.084574 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.084658 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.084678 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.084698 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.084715 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.187842 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.187909 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.187932 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.187955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.187970 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.291120 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.291154 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.291165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.291182 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.291194 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.322488 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/3.log" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.334735 4782 scope.go:117] "RemoveContainer" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" Feb 02 10:40:14 crc kubenswrapper[4782]: E0202 10:40:14.335675 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.350071 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.364015 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.375068 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.387317 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.393030 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.393115 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.393128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.393144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.393156 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.412728 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.428657 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.442836 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.453526 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.469733 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c8
69cb328aa09bc10e419af952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:13Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0202 10:40:12.792477 6839 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.792747 6839 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793003 6839 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793147 6839 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793807 6839 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793810 6839 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:40:12.793956 6839 factory.go:656] Stopping watch factory\\\\nI0202 10:40:12.793991 6839 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:40:12.815740 6839 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0202 10:40:12.815893 6839 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0202 10:40:12.815971 6839 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:40:12.816011 6839 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:40:12.816108 6839 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:40:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.478823 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69
f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.489215 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.495732 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.495779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.495791 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.495806 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.495816 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.502143 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.513429 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.523847 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d8
8c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.535259 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.548569 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.562760 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 
10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.574416 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.587720 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.598686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.598868 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.598963 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.599030 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.599087 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.702384 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.702437 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.702450 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.702470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.702482 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.805462 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.805720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.805802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.805908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.805979 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.821887 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:14 crc kubenswrapper[4782]: E0202 10:40:14.822022 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.822093 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:14 crc kubenswrapper[4782]: E0202 10:40:14.822141 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.822337 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:14 crc kubenswrapper[4782]: E0202 10:40:14.822405 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.824726 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:46:56.953021749 +0000 UTC Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.907902 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.907958 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.907969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.907983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.907994 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.010774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.010817 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.010829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.010846 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.010858 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.113762 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.113813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.113823 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.113844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.113858 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.218181 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.218527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.218686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.218890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.219014 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.321522 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.321560 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.321572 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.321586 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.321598 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.423813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.423849 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.423859 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.423872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.423881 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.581098 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.581131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.581143 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.581159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.581171 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.683352 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.683617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.683719 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.683794 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.683874 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.785894 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.785931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.785939 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.785952 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.785966 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.820609 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:15 crc kubenswrapper[4782]: E0202 10:40:15.820763 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.825776 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:40:55.111770736 +0000 UTC Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.888133 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.888195 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.888207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.888246 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.888258 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.992114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.992149 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.992158 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.992174 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.992183 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.096051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.096103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.096114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.096136 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.096149 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.198525 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.198568 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.198579 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.198595 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.198606 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.304552 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.304605 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.304618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.304660 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.304674 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.407967 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.408021 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.408034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.408057 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.408074 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.510371 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.510444 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.510458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.510478 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.510493 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.614389 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.614444 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.614457 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.614477 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.614490 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.718188 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.718236 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.718247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.718268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.718284 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.820865 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.820927 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.820865 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:16 crc kubenswrapper[4782]: E0202 10:40:16.821122 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:16 crc kubenswrapper[4782]: E0202 10:40:16.821122 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:16 crc kubenswrapper[4782]: E0202 10:40:16.821188 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.821494 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.821524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.821533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.821553 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.821564 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.826439 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:29:03.705135415 +0000 UTC Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.924323 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.924364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.924374 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.924391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.924406 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.027757 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.027815 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.027856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.027872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.027882 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.131428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.131497 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.131521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.131548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.131566 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.234750 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.234804 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.234817 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.234834 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.234846 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.337322 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.337387 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.337404 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.337428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.337444 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.441623 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.441710 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.441729 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.441753 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.441776 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.544427 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.544480 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.544492 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.544509 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.544524 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.646861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.646906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.646919 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.646937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.646950 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.749863 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.749898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.749909 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.749922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.749932 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.820823 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:17 crc kubenswrapper[4782]: E0202 10:40:17.821031 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.826752 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:11:42.161232532 +0000 UTC Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.852961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.853025 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.853045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.853074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.853091 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.956891 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.956964 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.956982 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.957006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.957025 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.059836 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.059906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.059919 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.059964 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.059982 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.163047 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.163130 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.163144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.163170 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.163189 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.266233 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.266273 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.266298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.266321 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.266333 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.370140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.370593 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.370603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.370619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.370631 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.475301 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.475373 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.475391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.475414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.475430 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.580275 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.580326 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.580337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.580364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.580379 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.684304 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.684534 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.684560 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.684580 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.684697 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.789325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.789373 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.789385 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.789403 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.789414 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.821042 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:40:18 crc kubenswrapper[4782]: E0202 10:40:18.821224 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.821489 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:40:18 crc kubenswrapper[4782]: E0202 10:40:18.821555 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.821800 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:40:18 crc kubenswrapper[4782]: E0202 10:40:18.821884 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.827680 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:45:33.680861242 +0000 UTC
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.893065 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.893125 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.893140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.893165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.893177 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.995750 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.995832 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.995844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.995866 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.995884 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.099030 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.099084 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.099094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.099112 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.099124 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.202304 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.202391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.202402 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.202415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.202426 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.300932 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.300985 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.301040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.301061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.301075 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.318571 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:19Z is after 
2025-08-24T17:21:41Z"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.322866 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.322902 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.322911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.322928 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.322942 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.339137 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:19Z is after 
2025-08-24T17:21:41Z"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.343966 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.344011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.344023 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.344041 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.344056 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.358391 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:19Z is after 
2025-08-24T17:21:41Z"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.363531 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.363592 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.363604 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.363625 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.363650 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.376364 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:19Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.380618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.380698 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.380711 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.380728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.380758 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.392674 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:19Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.392803 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.394822 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.394856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.394874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.394897 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.394918 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.497983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.498039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.498055 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.498079 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.498094 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.600541 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.600579 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.600589 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.600605 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.600616 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.703661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.703715 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.703728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.703749 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.703764 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.806920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.806979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.806989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.807012 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.807025 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.820146 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.820554 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.828713 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:15:06.326244572 +0000 UTC Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.910974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.911052 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.911067 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.911095 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.911110 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.014396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.014487 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.014496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.014509 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.014519 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.117438 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.117493 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.117507 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.117531 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.117541 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.220693 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.220761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.220772 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.220799 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.220818 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.323367 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.323434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.323455 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.323481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.323500 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.427063 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.427128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.427148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.427175 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.427198 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.531510 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.531624 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.531712 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.531929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.531953 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.636127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.636181 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.636198 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.636220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.636235 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.742006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.742131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.742147 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.742237 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.742250 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.821869 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.822120 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:20 crc kubenswrapper[4782]: E0202 10:40:20.822238 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.822257 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:20 crc kubenswrapper[4782]: E0202 10:40:20.822402 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:20 crc kubenswrapper[4782]: E0202 10:40:20.822547 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.828889 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:43:56.053142303 +0000 UTC Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.838980 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.845047 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.845102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.845114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.845134 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.845145 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.855921 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.878076 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:13Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0202 10:40:12.792477 6839 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.792747 6839 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793003 6839 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793147 6839 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793807 6839 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793810 6839 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:40:12.793956 6839 factory.go:656] Stopping watch factory\\\\nI0202 10:40:12.793991 6839 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:40:12.815740 6839 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0202 10:40:12.815893 6839 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0202 10:40:12.815971 6839 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:40:12.816011 6839 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:40:12.816108 6839 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:40:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.890284 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69
f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.905257 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4
f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.919770 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.935211 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.948811 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.948858 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.948872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.948892 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.948906 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.957530 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.974979 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.994035 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.009990 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z" Feb 02 
10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.023095 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.036691 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.052725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.052767 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.052779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.052797 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.052809 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.054760 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.074433 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.088994 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.101762 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.126914 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.143789 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.156288 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.156346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.156362 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.156382 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.156396 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.259929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.259985 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.259998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.260017 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.260030 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.362388 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.362445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.362459 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.362481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.362495 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.466717 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.466785 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.466800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.466826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.466844 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.570234 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.570524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.570594 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.570699 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.570926 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.674741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.674989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.675003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.675025 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.675040 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.778297 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.778345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.778360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.778383 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.778399 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.820881 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
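
The status_manager failures earlier in this stretch all share one root cause: the pod.network-node-identity.openshift.io webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-02. A minimal sketch for quantifying that skew from a saved copy of this journal, assuming the excerpt is in a local file named kubelet.log (the filename is an assumption; the message format is taken from the lines above):

import re
from datetime import datetime, timezone

# Matches the x509 failure printed by the kubelet above, e.g.
# "current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z"
PATTERN = re.compile(r'current time (\S+) is after (\S+)"')

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

with open("kubelet.log") as f:  # assumed local copy of this excerpt
    for line in f:
        m = PATTERN.search(line)
        if m:
            now, not_after = parse(m.group(1)), parse(m.group(2))
            print(f"webhook cert expired {now - not_after} before this log line")
            break
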
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.829880 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:06:34.729653964 +0000 UTC Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.882983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.883058 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.883074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.883098 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.883115 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.986612 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.986683 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.986695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.986720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.986734 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.091287 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.091332 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.091347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.091369 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.091386 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.194472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.194555 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.194579 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.194611 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.194726 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.298017 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.298066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.298081 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.298101 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.298115 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.400238 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.400272 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.400282 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.400298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.400308 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.502981 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.503059 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.503085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.503115 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.503137 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.605409 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.605459 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.605468 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.605482 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.605493 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.708557 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.708593 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.708609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.708630 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.708681 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.811913 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.812209 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.812284 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.812357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.812425 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.820775 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.820881 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
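
Every NodeNotReady heartbeat in this stretch cites the same missing piece: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal check, assuming it is run on the node itself (the directory path comes straight from the log; the extensions checked are the usual CNI config suffixes):

from pathlib import Path

# Directory named in the kubelet's NetworkPluginNotReady message above.
net_d = Path("/etc/kubernetes/cni/net.d")

if not net_d.is_dir():
    print(f"{net_d} does not exist yet")
else:
    confs = sorted(p.name for p in net_d.iterdir()
                   if p.suffix in (".conf", ".conflist", ".json"))
    print("CNI configs:", ", ".join(confs) if confs else "none (network plugin not ready)")
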
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:22 crc kubenswrapper[4782]: E0202 10:40:22.821008 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.821480 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:22 crc kubenswrapper[4782]: E0202 10:40:22.821840 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.830698 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:53:23.132359836 +0000 UTC Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.915920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.916009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.916033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.916473 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.916502 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.019440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.019500 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.019523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.019551 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.019575 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.123410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.123472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.123491 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.123514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.123534 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.232247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.232302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.232315 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.232338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.232361 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.335764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.336182 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.336290 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.336419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.336569 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.441394 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.441447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.441456 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.441473 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.441485 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.544599 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.545173 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.545302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.545413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.545500 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.648757 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.649215 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.649400 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.649558 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.649712 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.753776 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.753876 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.753903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.753943 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.753971 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.820918 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
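
The certificate_manager.go:356 records in this stretch print a different rotation deadline on every attempt (2025-11-14, 2025-11-25, and 2025-12-06 just below), all in the past relative to the node clock, which is why rotation keeps firing. client-go's certificate manager picks each deadline at a random point roughly 70 to 90 percent of the way through the certificate's validity; a toy recomputation under that assumption (notBefore is assumed, since the log prints only the expiration):

import random
from datetime import datetime, timedelta, timezone

not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # from the log
not_before = not_after - timedelta(days=365)                      # assumption: 1-year cert
lifetime = not_after - not_before

# Jittered deadline in the 70-90% window of the validity period,
# mirroring client-go's certificate manager behavior.
deadline = not_before + lifetime * random.uniform(0.7, 0.9)
node_clock = datetime(2026, 2, 2, 10, 40, 23, tzinfo=timezone.utc)  # from the log
print("rotation deadline:", deadline)
print("already due?", deadline < node_clock)
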
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.831893 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:22:57.310375393 +0000 UTC Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.858828 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.858899 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.858916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.858945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.858964 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.963025 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.963087 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.963112 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.963137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.963156 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.067313 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.067368 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.067378 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.067398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.067410 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.172356 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.172416 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.172434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.172499 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.172515 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.282171 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.282245 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.282263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.282291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.282310 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.385212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.385288 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.385302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.385327 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.385342 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.489127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.489202 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.489227 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.489254 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.489271 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.592166 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.592207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.592217 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.592233 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.592244 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.694393 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.694473 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.694497 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.694531 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.694554 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.797469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.797518 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.797532 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.797553 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.797568 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.821073 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.821117 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.821080 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:24 crc kubenswrapper[4782]: E0202 10:40:24.821221 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:24 crc kubenswrapper[4782]: E0202 10:40:24.821331 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:24 crc kubenswrapper[4782]: E0202 10:40:24.821383 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.833201 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:28:35.289084501 +0000 UTC Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.900291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.900342 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.900353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.900371 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.900385 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.003568 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.004026 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.004140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.004544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.004560 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.107500 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.107545 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.107553 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.107585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.107597 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.211057 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.211132 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.211158 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.211188 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.211213 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.314239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.314324 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.314360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.314394 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.314418 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.417240 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.417283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.417294 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.417308 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.417321 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.519856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.519922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.519937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.519955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.519968 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.622338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.622382 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.622394 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.622415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.622428 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.725381 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.725445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.725471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.725499 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.725516 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.820517 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:25 crc kubenswrapper[4782]: E0202 10:40:25.820762 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.829524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.829613 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.829634 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.829724 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.829755 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.834015 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:45:59.448927555 +0000 UTC Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.932573 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.932618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.932627 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.932662 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.932674 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.037120 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.037222 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.037241 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.037271 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.037290 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.141967 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.142019 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.142030 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.142053 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.142064 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.245447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.245508 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.245521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.245544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.245560 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.349793 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.349848 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.349860 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.349880 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.349894 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.452775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.452820 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.452834 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.452856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.452868 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.533445 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:26 crc kubenswrapper[4782]: E0202 10:40:26.533869 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:40:26 crc kubenswrapper[4782]: E0202 10:40:26.534030 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:30.533995332 +0000 UTC m=+170.418188078 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.556866 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.556914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.556924 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.556942 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.556953 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.661226 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.661328 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.661350 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.661380 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.661400 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.765900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.765955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.765968 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.765990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.766004 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.820558 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.820858 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:26 crc kubenswrapper[4782]: E0202 10:40:26.820858 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:26 crc kubenswrapper[4782]: E0202 10:40:26.820946 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.821165 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:26 crc kubenswrapper[4782]: E0202 10:40:26.821867 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.822633 4782 scope.go:117] "RemoveContainer" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" Feb 02 10:40:26 crc kubenswrapper[4782]: E0202 10:40:26.823012 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.834467 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:32:55.724144434 +0000 UTC Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.868795 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.868829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.868837 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.868850 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.868862 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.971932 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.971980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.971990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.972006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.972016 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.075037 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.075077 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.075111 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.075126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.075137 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.177434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.177503 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.177514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.177532 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.177544 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.279847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.279902 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.279914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.279940 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.279954 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.382446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.382609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.382796 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.382824 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.382841 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.485713 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.485764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.485778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.485795 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.485807 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.588875 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.588919 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.588955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.588970 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.588981 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.691861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.691945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.691962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.691985 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.692001 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.795026 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.795097 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.795120 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.795148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.795169 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.820136 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:27 crc kubenswrapper[4782]: E0202 10:40:27.820326 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.835209 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:34:53.674641704 +0000 UTC Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.897595 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.897666 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.897681 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.897702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.897716 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.000263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.000314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.000328 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.000346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.000358 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.103031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.103097 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.103126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.103154 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.103171 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.206289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.206360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.206381 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.206404 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.206420 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.309160 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.309210 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.309225 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.309242 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.309253 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.411430 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.411471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.411485 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.411502 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.411511 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.514151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.514210 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.514219 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.514233 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.514242 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.617044 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.617121 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.617137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.617160 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.617176 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.719906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.719946 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.719956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.719974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.719986 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.820139 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.820199 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:28 crc kubenswrapper[4782]: E0202 10:40:28.820303 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.820503 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:28 crc kubenswrapper[4782]: E0202 10:40:28.820550 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:28 crc kubenswrapper[4782]: E0202 10:40:28.820677 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.821595 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.821650 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.821659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.821674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.821683 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.835618 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 01:23:02.808481367 +0000 UTC Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.924029 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.924090 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.924106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.924128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.924146 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.036725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.036769 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.036781 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.036798 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.036811 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.140537 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.140581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.140596 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.140614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.140626 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.243469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.243516 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.243886 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.243931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.244081 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.347179 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.347463 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.347475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.347491 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.347519 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.450371 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.450434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.450457 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.450484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.450506 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.553360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.553400 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.553413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.553432 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.553445 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.657414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.657453 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.657461 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.657474 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.657486 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.728054 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.728101 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.728113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.728129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.728142 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.770900 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r"] Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.771321 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.772983 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.774138 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.774217 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.775060 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.820949 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:29 crc kubenswrapper[4782]: E0202 10:40:29.821144 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.826266 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fsqgq" podStartSLOduration=81.826248062 podStartE2EDuration="1m21.826248062s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.826204051 +0000 UTC m=+109.710396767" watchObservedRunningTime="2026-02-02 10:40:29.826248062 +0000 UTC m=+109.710440798" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.836501 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 13:52:52.704495709 +0000 UTC Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.836560 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.845074 4782 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.873250 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.873286 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.873338 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.873406 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.873470 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.883687 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.883613141 podStartE2EDuration="1m25.883613141s" podCreationTimestamp="2026-02-02 10:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.865188158 +0000 UTC m=+109.749380904" watchObservedRunningTime="2026-02-02 10:40:29.883613141 +0000 UTC m=+109.767805857" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.905994 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.905974755 podStartE2EDuration="58.905974755s" podCreationTimestamp="2026-02-02 10:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.884153516 +0000 UTC m=+109.768346242" watchObservedRunningTime="2026-02-02 10:40:29.905974755 +0000 UTC m=+109.790167471" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.930510 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podStartSLOduration=82.930492991 podStartE2EDuration="1m22.930492991s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.905819581 +0000 UTC m=+109.790012297" watchObservedRunningTime="2026-02-02 10:40:29.930492991 +0000 UTC m=+109.814685717" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.930614 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" podStartSLOduration=81.930609815 podStartE2EDuration="1m21.930609815s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.930029378 +0000 UTC m=+109.814222094" watchObservedRunningTime="2026-02-02 10:40:29.930609815 +0000 UTC m=+109.814802531" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.973946 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.974203 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.974279 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.974379 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.974458 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.974319 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.974552 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.975381 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.979807 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.983852 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=85.983833595 podStartE2EDuration="1m25.983833595s" podCreationTimestamp="2026-02-02 10:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.980690596 +0000 UTC m=+109.864883312" watchObservedRunningTime="2026-02-02 10:40:29.983833595 +0000 UTC m=+109.868026311" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.984309 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" podStartSLOduration=81.984299149 podStartE2EDuration="1m21.984299149s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.949263594 +0000 UTC m=+109.833456310" watchObservedRunningTime="2026-02-02 10:40:29.984299149 +0000 UTC m=+109.868491865" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.991244 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.033774 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fptzv" podStartSLOduration=83.033755913 podStartE2EDuration="1m23.033755913s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:30.03296553 +0000 UTC m=+109.917158256" watchObservedRunningTime="2026-02-02 10:40:30.033755913 +0000 UTC m=+109.917948649" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.042604 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-thvm5" podStartSLOduration=83.042587483 podStartE2EDuration="1m23.042587483s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:30.042233253 +0000 UTC m=+109.926425969" watchObservedRunningTime="2026-02-02 10:40:30.042587483 +0000 UTC m=+109.926780199" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.051796 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=44.051778864 
podStartE2EDuration="44.051778864s" podCreationTimestamp="2026-02-02 10:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:30.050888139 +0000 UTC m=+109.935080855" watchObservedRunningTime="2026-02-02 10:40:30.051778864 +0000 UTC m=+109.935971580" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.064302 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.064287569 podStartE2EDuration="1m29.064287569s" podCreationTimestamp="2026-02-02 10:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:30.063893088 +0000 UTC m=+109.948085804" watchObservedRunningTime="2026-02-02 10:40:30.064287569 +0000 UTC m=+109.948480285" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.094105 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.386235 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" event={"ID":"f69040ea-c39f-4efe-8d3a-8f1fc06d7652","Type":"ContainerStarted","Data":"c3d093a874f66d9dcad7c58c590cba8445470e229ae98667fe451e55e36c30de"} Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.820708 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:30 crc kubenswrapper[4782]: E0202 10:40:30.821692 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.821863 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:30 crc kubenswrapper[4782]: E0202 10:40:30.821928 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.822013 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:30 crc kubenswrapper[4782]: E0202 10:40:30.822132 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:31 crc kubenswrapper[4782]: I0202 10:40:31.395189 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" event={"ID":"f69040ea-c39f-4efe-8d3a-8f1fc06d7652","Type":"ContainerStarted","Data":"5f68cc8e5fb8e066a1514b75c86a87c935900172bef6bf00384642e7ea5e9a0e"} Feb 02 10:40:31 crc kubenswrapper[4782]: I0202 10:40:31.409414 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" podStartSLOduration=83.409399241 podStartE2EDuration="1m23.409399241s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:31.408742332 +0000 UTC m=+111.292935058" watchObservedRunningTime="2026-02-02 10:40:31.409399241 +0000 UTC m=+111.293591957" Feb 02 10:40:31 crc kubenswrapper[4782]: I0202 10:40:31.821025 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:31 crc kubenswrapper[4782]: E0202 10:40:31.821141 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:32 crc kubenswrapper[4782]: I0202 10:40:32.820969 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:32 crc kubenswrapper[4782]: I0202 10:40:32.820979 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:32 crc kubenswrapper[4782]: I0202 10:40:32.820993 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:32 crc kubenswrapper[4782]: E0202 10:40:32.821166 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:32 crc kubenswrapper[4782]: E0202 10:40:32.821355 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:32 crc kubenswrapper[4782]: E0202 10:40:32.821464 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:33 crc kubenswrapper[4782]: I0202 10:40:33.828968 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:33 crc kubenswrapper[4782]: E0202 10:40:33.829183 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:34 crc kubenswrapper[4782]: I0202 10:40:34.821875 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:34 crc kubenswrapper[4782]: I0202 10:40:34.821986 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:34 crc kubenswrapper[4782]: E0202 10:40:34.822082 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:34 crc kubenswrapper[4782]: E0202 10:40:34.822218 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:34 crc kubenswrapper[4782]: I0202 10:40:34.822017 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:34 crc kubenswrapper[4782]: E0202 10:40:34.822374 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:35 crc kubenswrapper[4782]: I0202 10:40:35.820522 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:35 crc kubenswrapper[4782]: E0202 10:40:35.820800 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:36 crc kubenswrapper[4782]: I0202 10:40:36.820858 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:36 crc kubenswrapper[4782]: I0202 10:40:36.820925 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:36 crc kubenswrapper[4782]: I0202 10:40:36.821012 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:36 crc kubenswrapper[4782]: E0202 10:40:36.821132 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:36 crc kubenswrapper[4782]: E0202 10:40:36.821290 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:36 crc kubenswrapper[4782]: E0202 10:40:36.821370 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:37 crc kubenswrapper[4782]: I0202 10:40:37.820902 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:37 crc kubenswrapper[4782]: E0202 10:40:37.821096 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:38 crc kubenswrapper[4782]: I0202 10:40:38.821005 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:38 crc kubenswrapper[4782]: I0202 10:40:38.821095 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:38 crc kubenswrapper[4782]: E0202 10:40:38.821242 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:38 crc kubenswrapper[4782]: I0202 10:40:38.821025 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:38 crc kubenswrapper[4782]: E0202 10:40:38.821375 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:38 crc kubenswrapper[4782]: E0202 10:40:38.821435 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:39 crc kubenswrapper[4782]: I0202 10:40:39.821168 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:39 crc kubenswrapper[4782]: E0202 10:40:39.822118 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:40 crc kubenswrapper[4782]: I0202 10:40:40.821083 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:40 crc kubenswrapper[4782]: I0202 10:40:40.821148 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:40 crc kubenswrapper[4782]: E0202 10:40:40.821693 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:40 crc kubenswrapper[4782]: I0202 10:40:40.822093 4782 scope.go:117] "RemoveContainer" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" Feb 02 10:40:40 crc kubenswrapper[4782]: E0202 10:40:40.822211 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:40 crc kubenswrapper[4782]: E0202 10:40:40.822378 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:40:40 crc kubenswrapper[4782]: I0202 10:40:40.822498 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:40 crc kubenswrapper[4782]: E0202 10:40:40.822583 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:40 crc kubenswrapper[4782]: E0202 10:40:40.856605 4782 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 10:40:40 crc kubenswrapper[4782]: E0202 10:40:40.980264 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:40:41 crc kubenswrapper[4782]: I0202 10:40:41.845256 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:41 crc kubenswrapper[4782]: E0202 10:40:41.845473 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:42 crc kubenswrapper[4782]: I0202 10:40:42.820524 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:42 crc kubenswrapper[4782]: E0202 10:40:42.820781 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:42 crc kubenswrapper[4782]: I0202 10:40:42.820524 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:42 crc kubenswrapper[4782]: E0202 10:40:42.820910 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:42 crc kubenswrapper[4782]: I0202 10:40:42.820981 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:42 crc kubenswrapper[4782]: E0202 10:40:42.821072 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:43 crc kubenswrapper[4782]: I0202 10:40:43.821070 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:43 crc kubenswrapper[4782]: E0202 10:40:43.821287 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:44 crc kubenswrapper[4782]: I0202 10:40:44.820851 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:44 crc kubenswrapper[4782]: I0202 10:40:44.820909 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:44 crc kubenswrapper[4782]: I0202 10:40:44.820854 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:44 crc kubenswrapper[4782]: E0202 10:40:44.821098 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:44 crc kubenswrapper[4782]: E0202 10:40:44.821255 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:44 crc kubenswrapper[4782]: E0202 10:40:44.821328 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:45 crc kubenswrapper[4782]: I0202 10:40:45.820483 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:45 crc kubenswrapper[4782]: E0202 10:40:45.820750 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:45 crc kubenswrapper[4782]: E0202 10:40:45.981560 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:40:46 crc kubenswrapper[4782]: I0202 10:40:46.821003 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:46 crc kubenswrapper[4782]: I0202 10:40:46.821615 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:46 crc kubenswrapper[4782]: E0202 10:40:46.821674 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:46 crc kubenswrapper[4782]: E0202 10:40:46.821840 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:46 crc kubenswrapper[4782]: I0202 10:40:46.821141 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 10:40:46 crc kubenswrapper[4782]: E0202 10:40:46.821946 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.457920 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/1.log" Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.458620 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/0.log" Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.458691 4782 generic.go:334] "Generic (PLEG): container finished" podID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" containerID="b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad" exitCode=1 Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.458738 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerDied","Data":"b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad"} Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.458783 4782 scope.go:117] "RemoveContainer" containerID="9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937" Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.464426 4782 scope.go:117] "RemoveContainer" containerID="b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad" Feb 02 10:40:47 crc kubenswrapper[4782]: E0202 10:40:47.465441 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fsqgq_openshift-multus(04d9744a-e730-45b4-9f0c-bbb5b02cd311)\"" pod="openshift-multus/multus-fsqgq" podUID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.820820 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:47 crc kubenswrapper[4782]: E0202 10:40:47.820994 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:48 crc kubenswrapper[4782]: I0202 10:40:48.464308 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/1.log" Feb 02 10:40:48 crc kubenswrapper[4782]: I0202 10:40:48.820954 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:48 crc kubenswrapper[4782]: I0202 10:40:48.820975 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
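The CrashLoopBackOff entry above ("back-off 10s restarting failed container=kube-multus") is kubelet's restart backoff: the first restart after a crash waits 10s, and the wait doubles on every subsequent crash up to a 5-minute cap, resetting once the container stays up for a while. A sketch of that schedule; the 10s initial delay and 5m cap are kubelet's well-known defaults, assumed here rather than read from this log:

    // backoff.go: the crash-loop restart delay schedule, illustratively.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second        // "back-off 10s" = first retry
        const maxDelay = 5 * time.Minute // cap after repeated crashes
        for crash := 1; crash <= 7; crash++ {
            fmt.Printf("crash %d: next restart in %v\n", crash, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Consistent with that schedule, kube-multus is restarted at 10:40:59-10:41:00 (the RemoveContainer/ContainerStarted pair further down), roughly 10s after the 10:40:47 crash.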
Feb 02 10:40:48 crc kubenswrapper[4782]: I0202 10:40:48.821110 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:48 crc kubenswrapper[4782]: E0202 10:40:48.821202 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:48 crc kubenswrapper[4782]: E0202 10:40:48.821308 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:48 crc kubenswrapper[4782]: E0202 10:40:48.821504 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:49 crc kubenswrapper[4782]: I0202 10:40:49.820620 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:49 crc kubenswrapper[4782]: E0202 10:40:49.821074 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:50 crc kubenswrapper[4782]: I0202 10:40:50.820198 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:50 crc kubenswrapper[4782]: I0202 10:40:50.820196 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:50 crc kubenswrapper[4782]: I0202 10:40:50.820265 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:50 crc kubenswrapper[4782]: E0202 10:40:50.821172 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:50 crc kubenswrapper[4782]: E0202 10:40:50.821279 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:50 crc kubenswrapper[4782]: E0202 10:40:50.821407 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:50 crc kubenswrapper[4782]: E0202 10:40:50.982333 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:40:51 crc kubenswrapper[4782]: I0202 10:40:51.820762 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:51 crc kubenswrapper[4782]: E0202 10:40:51.820963 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:52 crc kubenswrapper[4782]: I0202 10:40:52.821133 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:52 crc kubenswrapper[4782]: I0202 10:40:52.821231 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:52 crc kubenswrapper[4782]: E0202 10:40:52.821275 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:52 crc kubenswrapper[4782]: I0202 10:40:52.821154 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:52 crc kubenswrapper[4782]: E0202 10:40:52.821425 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:52 crc kubenswrapper[4782]: E0202 10:40:52.821512 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:53 crc kubenswrapper[4782]: I0202 10:40:53.820473 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:53 crc kubenswrapper[4782]: E0202 10:40:53.820752 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:53 crc kubenswrapper[4782]: I0202 10:40:53.821892 4782 scope.go:117] "RemoveContainer" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.492748 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/3.log" Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.496203 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b"} Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.496814 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.530237 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podStartSLOduration=106.530197389 podStartE2EDuration="1m46.530197389s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:54.527957704 +0000 UTC m=+134.412150440" watchObservedRunningTime="2026-02-02 10:40:54.530197389 +0000 UTC m=+134.414390145" Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.724386 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tv4xc"] Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.724567 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:54 crc kubenswrapper[4782]: E0202 10:40:54.724755 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.820797 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.820798 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:54 crc kubenswrapper[4782]: E0202 10:40:54.820947 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:54 crc kubenswrapper[4782]: E0202 10:40:54.821038 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:55 crc kubenswrapper[4782]: I0202 10:40:55.820604 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:55 crc kubenswrapper[4782]: I0202 10:40:55.820634 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:55 crc kubenswrapper[4782]: E0202 10:40:55.820794 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:55 crc kubenswrapper[4782]: E0202 10:40:55.820838 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:55 crc kubenswrapper[4782]: E0202 10:40:55.983399 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:40:56 crc kubenswrapper[4782]: I0202 10:40:56.820912 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:56 crc kubenswrapper[4782]: I0202 10:40:56.820969 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:56 crc kubenswrapper[4782]: E0202 10:40:56.822018 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:56 crc kubenswrapper[4782]: E0202 10:40:56.822413 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:57 crc kubenswrapper[4782]: I0202 10:40:57.820454 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:57 crc kubenswrapper[4782]: I0202 10:40:57.820525 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:57 crc kubenswrapper[4782]: E0202 10:40:57.820712 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:57 crc kubenswrapper[4782]: E0202 10:40:57.820846 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:58 crc kubenswrapper[4782]: I0202 10:40:58.820410 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:58 crc kubenswrapper[4782]: I0202 10:40:58.820591 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:58 crc kubenswrapper[4782]: E0202 10:40:58.821977 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:58 crc kubenswrapper[4782]: E0202 10:40:58.822087 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:59 crc kubenswrapper[4782]: I0202 10:40:59.820611 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:59 crc kubenswrapper[4782]: I0202 10:40:59.821269 4782 scope.go:117] "RemoveContainer" containerID="b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad" Feb 02 10:40:59 crc kubenswrapper[4782]: I0202 10:40:59.820611 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:59 crc kubenswrapper[4782]: E0202 10:40:59.821539 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:59 crc kubenswrapper[4782]: E0202 10:40:59.821683 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:41:00 crc kubenswrapper[4782]: I0202 10:41:00.520885 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/1.log" Feb 02 10:41:00 crc kubenswrapper[4782]: I0202 10:41:00.520956 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerStarted","Data":"4fde6ad054eb082a082a2907b3951afa7c993e3cd3c0464f51b2ceec9802143d"} Feb 02 10:41:00 crc kubenswrapper[4782]: I0202 10:41:00.812858 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:41:00 crc kubenswrapper[4782]: I0202 10:41:00.821893 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:41:00 crc kubenswrapper[4782]: I0202 10:41:00.821958 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:00 crc kubenswrapper[4782]: E0202 10:41:00.822120 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:41:00 crc kubenswrapper[4782]: E0202 10:41:00.822274 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:41:00 crc kubenswrapper[4782]: E0202 10:41:00.985560 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:41:01 crc kubenswrapper[4782]: I0202 10:41:01.821001 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:41:01 crc kubenswrapper[4782]: E0202 10:41:01.821481 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:41:01 crc kubenswrapper[4782]: I0202 10:41:01.820995 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:41:01 crc kubenswrapper[4782]: E0202 10:41:01.821788 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:41:02 crc kubenswrapper[4782]: I0202 10:41:02.820731 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:02 crc kubenswrapper[4782]: I0202 10:41:02.820748 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:41:02 crc kubenswrapper[4782]: E0202 10:41:02.820967 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:41:02 crc kubenswrapper[4782]: E0202 10:41:02.821054 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:41:03 crc kubenswrapper[4782]: I0202 10:41:03.820567 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:41:03 crc kubenswrapper[4782]: I0202 10:41:03.820566 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:41:03 crc kubenswrapper[4782]: E0202 10:41:03.821372 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:41:03 crc kubenswrapper[4782]: E0202 10:41:03.821573 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:41:04 crc kubenswrapper[4782]: I0202 10:41:04.821011 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:41:04 crc kubenswrapper[4782]: E0202 10:41:04.821299 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:41:04 crc kubenswrapper[4782]: I0202 10:41:04.821679 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:04 crc kubenswrapper[4782]: E0202 10:41:04.821823 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:41:05 crc kubenswrapper[4782]: I0202 10:41:05.821018 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:41:05 crc kubenswrapper[4782]: E0202 10:41:05.821353 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:41:05 crc kubenswrapper[4782]: I0202 10:41:05.821623 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:41:05 crc kubenswrapper[4782]: E0202 10:41:05.821939 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:41:06 crc kubenswrapper[4782]: I0202 10:41:06.821912 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:06 crc kubenswrapper[4782]: I0202 10:41:06.821913 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:41:06 crc kubenswrapper[4782]: I0202 10:41:06.825720 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 10:41:06 crc kubenswrapper[4782]: I0202 10:41:06.826243 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 10:41:06 crc kubenswrapper[4782]: I0202 10:41:06.825978 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 10:41:06 crc kubenswrapper[4782]: I0202 10:41:06.826610 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 10:41:07 crc kubenswrapper[4782]: I0202 10:41:07.820765 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:41:07 crc kubenswrapper[4782]: I0202 10:41:07.820832 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:41:07 crc kubenswrapper[4782]: I0202 10:41:07.824623 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 10:41:07 crc kubenswrapper[4782]: I0202 10:41:07.824909 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.907954 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.908126 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.908174 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.908217 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:41:08 crc kubenswrapper[4782]: E0202 10:41:08.908273 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:43:10.908216135 +0000 UTC m=+270.792408861 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
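This failure is parked, not fatal: TearDown of the hostpath-provisioner volume cannot run because the kubevirt.io.hostpath-provisioner CSI driver has not (re)registered with this kubelet yet, so nestedpendingoperations schedules a retry instead of looping hot. The 2m2s durationBeforeRetry matches the cap of the exponential backoff Kubernetes uses for volume operations; a sketch of that progression, with the 500ms initial delay and doubling factor assumed from the upstream exponentialbackoff defaults:

    // volretry.go: the delay schedule behind "durationBeforeRetry 2m2s".
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond                // assumed initial delay
        const maxDelay = 2*time.Minute + 2*time.Second // the 2m2s cap in the log
        for failure := 1; failure <= 10; failure++ {
            fmt.Printf("failure %d: no retries permitted for %v\n", failure, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Hence "No retries permitted until 10:43:10": the operation had already failed enough times to reach the cap, and 10:41:08 plus 2m2s is 10:43:10.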
Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.908349 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.912724 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.914698 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.916489 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.924912 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.939626 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.944459 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.048554 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
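The "not found in the list of registered CSI drivers" wording points at kubelet's plugin watcher: a CSI driver announces itself by placing a registration socket under the kubelet plugins registry, and until that socket exists every CSI call for that driver fails like the TearDown above. A small sketch for inspecting that directory on a node; the path is the conventional /var/lib/kubelet/plugins_registry (an assumption, not something this log states):

    // pluginsock.go: list plugin registration sockets kubelet would scan.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
        if err != nil {
            fmt.Println("cannot read plugin registry:", err)
            return
        }
        for _, e := range entries {
            // A registered hostpath provisioner would show up here, e.g. as
            // a <driver-name>-reg.sock entry (name format is illustrative).
            fmt.Println("registration socket:", e.Name())
        }
    }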
Feb 02 10:41:09 crc kubenswrapper[4782]: W0202 10:41:09.193157 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-b968474029565dafc027d95578ce5f8b251b1a9e5170404e3ec44d182870a1d3 WatchSource:0}: Error finding container b968474029565dafc027d95578ce5f8b251b1a9e5170404e3ec44d182870a1d3: Status 404 returned error can't find the container with id b968474029565dafc027d95578ce5f8b251b1a9e5170404e3ec44d182870a1d3 Feb 02 10:41:09 crc kubenswrapper[4782]: W0202 10:41:09.326386 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-7682795dd43cf066fc97997f6f60658588d0a5cb6413c377852dde42fda6eec5 WatchSource:0}: Error finding container 7682795dd43cf066fc97997f6f60658588d0a5cb6413c377852dde42fda6eec5: Status 404 returned error can't find the container with id 7682795dd43cf066fc97997f6f60658588d0a5cb6413c377852dde42fda6eec5 Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.559018 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"51a938cf68dd53116ad972c650431693f3cb6ec5aa1ed0fa2a55dbba6adfbd93"} Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.559112 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b968474029565dafc027d95578ce5f8b251b1a9e5170404e3ec44d182870a1d3"} Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.559354 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.560566 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"49c545f6456a1a5041e4d9a29bca5adff910f19a4469f4d698a26a0b626ec70b"} Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.560611 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5aed476e808f92c6201ee00ddc26483607cecd066b34d514b049289b3e95254c"} Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.562028 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7898e7b0b477028cb13ab2465d17f8f7dbef9a14c976c8dfe4d7a9098082f242"} Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.562065 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7682795dd43cf066fc97997f6f60658588d0a5cb6413c377852dde42fda6eec5"} Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.553481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
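NodeReady at 10:41:10 is the pivot of this whole log: the restarted kube-multus wrote the CNI config (the successful sandbox creations just above show the pod network working again), the runtime's NetworkReady flipped to true, kubelet marked the node Ready, and the scheduler immediately handed it the backlog of control-plane pods (the burst of SyncLoop ADD entries that follows). The two manager.go 404 warnings above are a benign race in the same transition: cAdvisor saw the new crio-* cgroups before the runtime had published metadata for them. A client-go sketch for checking the condition kubelet just recorded; it assumes a reachable kubeconfig in the default location, and the node name crc is taken from the log:

    // nodeready.go: print the Ready condition of node "crc".
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == "Ready" {
                fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
            }
        }
    }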
node="crc" event="NodeReady" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.609295 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.610420 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.610657 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.612179 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.612354 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.615011 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.615165 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.620493 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.621189 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625492 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.635548 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.635845 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5br4b"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.621191 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.621276 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.636199 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4b45h"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.621382 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.636432 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b98c7"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622105 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 10:41:10 crc 
kubenswrapper[4782]: I0202 10:41:10.636583 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.636753 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.637069 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.637163 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622247 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622355 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622418 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622576 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.638418 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622803 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622974 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.623049 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625480 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625608 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625724 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625796 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.639039 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-sf9m8"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625860 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625944 4782 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.628914 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.639434 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.639838 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.640006 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.640281 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.640850 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.642398 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.642725 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.644917 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063dd8d0-356e-4c11-96fd-6ecee1f28da8-config\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.644943 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-config\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.644960 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.644980 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-oauth-serving-cert\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.644995 4782 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-config\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645010 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-config\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645026 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5466b6b9-d1d5-471e-90e7-75f07078f8dc-serving-cert\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645040 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bklv5\" (UniqueName: \"kubernetes.io/projected/76afda26-696c-4996-bc58-1c928e4fa92a-kube-api-access-bklv5\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645055 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xvws\" (UniqueName: \"kubernetes.io/projected/063dd8d0-356e-4c11-96fd-6ecee1f28da8-kube-api-access-6xvws\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645074 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e5362ac-062f-4bf2-a0dc-e96b2750ab52-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6pzt4\" (UID: \"3e5362ac-062f-4bf2-a0dc-e96b2750ab52\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645090 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84lv6\" (UniqueName: \"kubernetes.io/projected/3e5362ac-062f-4bf2-a0dc-e96b2750ab52-kube-api-access-84lv6\") pod \"cluster-samples-operator-665b6dd947-6pzt4\" (UID: \"3e5362ac-062f-4bf2-a0dc-e96b2750ab52\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645107 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc 
kubenswrapper[4782]: I0202 10:41:10.645126 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-service-ca\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645142 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645157 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645176 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfe41db-4509-43cc-a95c-9ac09e6c9390-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645192 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccfe41db-4509-43cc-a95c-9ac09e6c9390-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645208 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-oauth-config\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645227 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5r4w\" (UniqueName: \"kubernetes.io/projected/5466b6b9-d1d5-471e-90e7-75f07078f8dc-kube-api-access-k5r4w\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645242 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-serving-cert\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " 
pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645259 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a1b37a-9035-459b-a485-280325d33264-serving-cert\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645275 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9tz5\" (UniqueName: \"kubernetes.io/projected/59a1b37a-9035-459b-a485-280325d33264-kube-api-access-p9tz5\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645291 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-etcd-client\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645307 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-machine-approver-tls\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645337 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645353 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acfa5788-ab19-4e50-bc93-31b7a5069b32-serving-cert\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645369 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/acfa5788-ab19-4e50-bc93-31b7a5069b32-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645385 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mr74\" (UniqueName: \"kubernetes.io/projected/e74c7e17-c70b-4637-ad47-58e1e192c52e-kube-api-access-5mr74\") pod \"downloads-7954f5f757-4b45h\" (UID: \"e74c7e17-c70b-4637-ad47-58e1e192c52e\") " 
pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645400 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645418 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbjg\" (UniqueName: \"kubernetes.io/projected/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-kube-api-access-pdbjg\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645444 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-encryption-config\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645463 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/063dd8d0-356e-4c11-96fd-6ecee1f28da8-images\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645479 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8vc\" (UniqueName: \"kubernetes.io/projected/ccfe41db-4509-43cc-a95c-9ac09e6c9390-kube-api-access-lr8vc\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645496 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7stzk\" (UniqueName: \"kubernetes.io/projected/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-kube-api-access-7stzk\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645512 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-client-ca\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645529 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-serving-cert\") pod 
\"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645547 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/082079e0-8d5a-4d2e-959e-0366e4787bd5-audit-dir\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645564 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645582 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n6ww\" (UniqueName: \"kubernetes.io/projected/082079e0-8d5a-4d2e-959e-0366e4787bd5-kube-api-access-8n6ww\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645799 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-audit-policies\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645851 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-auth-proxy-config\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645930 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-service-ca-bundle\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645956 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-trusted-ca-bundle\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645985 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646016 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-console-config\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646067 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646093 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/063dd8d0-356e-4c11-96fd-6ecee1f28da8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646121 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646142 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8sc\" (UniqueName: \"kubernetes.io/projected/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-kube-api-access-fl8sc\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646175 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77znk\" (UniqueName: \"kubernetes.io/projected/acfa5788-ab19-4e50-bc93-31b7a5069b32-kube-api-access-77znk\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646200 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-config\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.651764 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z8gmg"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.653057 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.663185 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sc7kt"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.664537 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.667373 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.667589 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.675167 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.675803 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.679396 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5n294"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.695427 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.696380 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.696921 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.697211 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.704980 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.705388 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.715396 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.716088 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.733870 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l2hps"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.734495 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.734807 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-29qjf"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.735277 4782 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.736053 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.736351 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.738953 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.740557 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.740994 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.741302 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.741734 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.741916 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.742219 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.742536 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.742940 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qjf8d"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.743146 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.743419 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.743575 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.743791 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.743974 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.743459 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.744255 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.744371 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.744622 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.744760 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.744876 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.744986 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.745474 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.745604 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.746887 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747188 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747308 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747430 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747550 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747709 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747846 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747972 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.748093 4782 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.748206 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.748383 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.748517 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.748718 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.748839 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.749013 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.749157 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.749269 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.749388 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.749579 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.749947 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.750124 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.750292 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.750492 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.750560 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.750286 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.751611 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.751976 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-encryption-config\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752025 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/063dd8d0-356e-4c11-96fd-6ecee1f28da8-images\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752065 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh2jt\" (UniqueName: \"kubernetes.io/projected/03d47200-aed2-431d-89fd-c27cdd91564f-kube-api-access-vh2jt\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752097 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-audit\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752125 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2hdn\" (UniqueName: \"kubernetes.io/projected/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-kube-api-access-w2hdn\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752148 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-config\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752175 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752204 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr8vc\" (UniqueName: \"kubernetes.io/projected/ccfe41db-4509-43cc-a95c-9ac09e6c9390-kube-api-access-lr8vc\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:10 
crc kubenswrapper[4782]: I0202 10:41:10.752225 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-audit-policies\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752243 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1136cbad-9e47-49e0-a890-83d86d325537-metrics-tls\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752279 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e5e11c7-6a7f-466b-8d59-674bb931db4c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752302 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7stzk\" (UniqueName: \"kubernetes.io/projected/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-kube-api-access-7stzk\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752323 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f4f42b8-506a-4922-b7c4-7f77afbb238c-audit-dir\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752341 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8224e4c0-380b-489a-98d8-ee1b15c1637a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qjf8d\" (UID: \"8224e4c0-380b-489a-98d8-ee1b15c1637a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752357 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtfk5\" (UniqueName: \"kubernetes.io/projected/1f4f42b8-506a-4922-b7c4-7f77afbb238c-kube-api-access-xtfk5\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 
10:41:10.752380 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-client-ca\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752407 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-serving-cert\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752433 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/082079e0-8d5a-4d2e-959e-0366e4787bd5-audit-dir\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752453 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752479 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752504 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1136cbad-9e47-49e0-a890-83d86d325537-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752530 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-image-import-ca\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752551 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-serving-cert\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752580 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n6ww\" (UniqueName: 
\"kubernetes.io/projected/082079e0-8d5a-4d2e-959e-0366e4787bd5-kube-api-access-8n6ww\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752603 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-audit-policies\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752629 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f4f42b8-506a-4922-b7c4-7f77afbb238c-node-pullsecrets\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752676 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-auth-proxy-config\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752701 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-service-ca-bundle\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752728 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-serving-cert\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752754 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e5e11c7-6a7f-466b-8d59-674bb931db4c-config\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752775 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-etcd-client\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752794 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-encryption-config\") pod \"apiserver-76f77b778f-z8gmg\" (UID: 
\"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752814 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-trusted-ca-bundle\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752831 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752853 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671fcd5f-c44a-46e7-840f-d204d2464822-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752874 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/671fcd5f-c44a-46e7-840f-d204d2464822-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752907 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-console-config\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752929 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752947 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/063dd8d0-356e-4c11-96fd-6ecee1f28da8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752967 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752989 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753011 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl8sc\" (UniqueName: \"kubernetes.io/projected/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-kube-api-access-fl8sc\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753032 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-stats-auth\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753058 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77znk\" (UniqueName: \"kubernetes.io/projected/acfa5788-ab19-4e50-bc93-31b7a5069b32-kube-api-access-77znk\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753083 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-config\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753104 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e5e11c7-6a7f-466b-8d59-674bb931db4c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753124 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86721216-38a9-4b44-8e34-d01a33c39e82-srv-cert\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753150 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tpw2\" (UniqueName: \"kubernetes.io/projected/86721216-38a9-4b44-8e34-d01a33c39e82-kube-api-access-9tpw2\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" 
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753177 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063dd8d0-356e-4c11-96fd-6ecee1f28da8-config\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753198 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-config\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753221 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753246 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753270 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-config\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753290 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5466b6b9-d1d5-471e-90e7-75f07078f8dc-serving-cert\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753312 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-oauth-serving-cert\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753332 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-config\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753357 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753382 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1136cbad-9e47-49e0-a890-83d86d325537-trusted-ca\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753404 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-config\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753427 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bklv5\" (UniqueName: \"kubernetes.io/projected/76afda26-696c-4996-bc58-1c928e4fa92a-kube-api-access-bklv5\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753452 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xvws\" (UniqueName: \"kubernetes.io/projected/063dd8d0-356e-4c11-96fd-6ecee1f28da8-kube-api-access-6xvws\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753472 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753492 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e5362ac-062f-4bf2-a0dc-e96b2750ab52-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6pzt4\" (UID: \"3e5362ac-062f-4bf2-a0dc-e96b2750ab52\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753526 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84lv6\" (UniqueName: \"kubernetes.io/projected/3e5362ac-062f-4bf2-a0dc-e96b2750ab52-kube-api-access-84lv6\") pod \"cluster-samples-operator-665b6dd947-6pzt4\" (UID: \"3e5362ac-062f-4bf2-a0dc-e96b2750ab52\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753567 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-default-certificate\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753587 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753606 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfe41db-4509-43cc-a95c-9ac09e6c9390-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753627 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-service-ca\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753670 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753692 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccfe41db-4509-43cc-a95c-9ac09e6c9390-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753714 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671fcd5f-c44a-46e7-840f-d204d2464822-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753735 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753757 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-oauth-config\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753777 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5r4w\" (UniqueName: \"kubernetes.io/projected/5466b6b9-d1d5-471e-90e7-75f07078f8dc-kube-api-access-k5r4w\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753798 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-client-ca\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753821 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-etcd-serving-ca\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753845 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-metrics-certs\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753868 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753894 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-serving-cert\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753916 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a1b37a-9035-459b-a485-280325d33264-serving-cert\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: 
\"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753940 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03d47200-aed2-431d-89fd-c27cdd91564f-audit-dir\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753962 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753986 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9tz5\" (UniqueName: \"kubernetes.io/projected/59a1b37a-9035-459b-a485-280325d33264-kube-api-access-p9tz5\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754011 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-etcd-client\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754035 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-machine-approver-tls\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754061 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754086 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmqdm\" (UniqueName: \"kubernetes.io/projected/1136cbad-9e47-49e0-a890-83d86d325537-kube-api-access-gmqdm\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mr74\" (UniqueName: \"kubernetes.io/projected/e74c7e17-c70b-4637-ad47-58e1e192c52e-kube-api-access-5mr74\") pod \"downloads-7954f5f757-4b45h\" (UID: 
\"e74c7e17-c70b-4637-ad47-58e1e192c52e\") " pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754139 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dqmf\" (UniqueName: \"kubernetes.io/projected/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-kube-api-access-7dqmf\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754168 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754193 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acfa5788-ab19-4e50-bc93-31b7a5069b32-serving-cert\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754220 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/acfa5788-ab19-4e50-bc93-31b7a5069b32-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754262 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754288 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dch5s\" (UniqueName: \"kubernetes.io/projected/8224e4c0-380b-489a-98d8-ee1b15c1637a-kube-api-access-dch5s\") pod \"multus-admission-controller-857f4d67dd-qjf8d\" (UID: \"8224e4c0-380b-489a-98d8-ee1b15c1637a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754317 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86721216-38a9-4b44-8e34-d01a33c39e82-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754348 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: 
\"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754376 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdbjg\" (UniqueName: \"kubernetes.io/projected/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-kube-api-access-pdbjg\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754405 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754431 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-service-ca-bundle\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.755064 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.755337 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.755497 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.755694 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.755888 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.756045 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.756228 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.756403 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.756543 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.756686 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.756958 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 
10:41:10.757086 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.757118 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.757323 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.757339 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.758186 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-audit-policies\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.758992 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/063dd8d0-356e-4c11-96fd-6ecee1f28da8-images\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.759262 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.760086 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.760577 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.762099 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.769664 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.770422 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsb8s"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.770927 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.771161 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-config\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.771268 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.772127 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-client-ca\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.772450 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.772756 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.772910 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-console-config\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.773213 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.773398 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.773485 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.773819 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.773997 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.777358 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-oauth-serving-cert\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.778261 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-config\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.778730 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5466b6b9-d1d5-471e-90e7-75f07078f8dc-serving-cert\") pod \"authentication-operator-69f744f599-b98c7\" (UID: 
\"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.779818 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acfa5788-ab19-4e50-bc93-31b7a5069b32-serving-cert\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.780775 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/acfa5788-ab19-4e50-bc93-31b7a5069b32-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.773629 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-auth-proxy-config\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.781954 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/082079e0-8d5a-4d2e-959e-0366e4787bd5-audit-dir\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.782209 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.785258 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-service-ca-bundle\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.782587 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.783288 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-service-ca\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.783428 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccfe41db-4509-43cc-a95c-9ac09e6c9390-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.785866 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.782496 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.783669 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.793014 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-machine-approver-tls\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.793476 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/063dd8d0-356e-4c11-96fd-6ecee1f28da8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.793971 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-serving-cert\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.795058 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.795682 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063dd8d0-356e-4c11-96fd-6ecee1f28da8-config\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.796023 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a1b37a-9035-459b-a485-280325d33264-serving-cert\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.801165 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 10:41:10 crc 
kubenswrapper[4782]: I0202 10:41:10.804697 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e5362ac-062f-4bf2-a0dc-e96b2750ab52-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6pzt4\" (UID: \"3e5362ac-062f-4bf2-a0dc-e96b2750ab52\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.804794 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.804899 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-encryption-config\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.805194 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-etcd-client\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.805366 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-oauth-config\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.806054 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.820167 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.820734 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfe41db-4509-43cc-a95c-9ac09e6c9390-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.849922 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.852205 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.860886 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-serving-cert\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: 
\"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.874101 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.882866 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2hdn\" (UniqueName: \"kubernetes.io/projected/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-kube-api-access-w2hdn\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.882930 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh2jt\" (UniqueName: \"kubernetes.io/projected/03d47200-aed2-431d-89fd-c27cdd91564f-kube-api-access-vh2jt\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.882953 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-audit\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.883891 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-audit\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.884549 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885173 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1136cbad-9e47-49e0-a890-83d86d325537-metrics-tls\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885204 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-config\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885223 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc 
kubenswrapper[4782]: I0202 10:41:10.885252 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-audit-policies\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885269 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e5e11c7-6a7f-466b-8d59-674bb931db4c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f4f42b8-506a-4922-b7c4-7f77afbb238c-audit-dir\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885345 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8224e4c0-380b-489a-98d8-ee1b15c1637a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qjf8d\" (UID: \"8224e4c0-380b-489a-98d8-ee1b15c1637a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885364 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtfk5\" (UniqueName: \"kubernetes.io/projected/1f4f42b8-506a-4922-b7c4-7f77afbb238c-kube-api-access-xtfk5\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885393 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1136cbad-9e47-49e0-a890-83d86d325537-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885421 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885455 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-image-import-ca\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885475 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-serving-cert\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885511 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f4f42b8-506a-4922-b7c4-7f77afbb238c-node-pullsecrets\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885535 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c29fd\" (UniqueName: \"kubernetes.io/projected/ed24c96e-c389-443d-bdcf-b6fd727d472e-kube-api-access-c29fd\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.886245 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-serving-cert\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.886736 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.886851 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f4f42b8-506a-4922-b7c4-7f77afbb238c-audit-dir\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.886926 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-audit-policies\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.886974 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.888151 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f4f42b8-506a-4922-b7c4-7f77afbb238c-node-pullsecrets\") pod 
\"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.888269 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.888477 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-image-import-ca\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.888598 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e5e11c7-6a7f-466b-8d59-674bb931db4c-config\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.888672 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-etcd-client\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.888809 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-encryption-config\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.889374 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.889991 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.890849 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1136cbad-9e47-49e0-a890-83d86d325537-metrics-tls\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.890856 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bhjwh"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.890984 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.891692 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-84dzp"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.892066 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.892473 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.892724 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.892895 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.896954 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-serving-cert\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.897188 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-serving-cert\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.898018 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.898169 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.900022 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-trusted-ca-bundle\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.900350 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.903064 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-config\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.903504 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-etcd-client\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.903812 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.904280 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jxz27"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.904764 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.904926 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.905206 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-encryption-config\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.905602 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671fcd5f-c44a-46e7-840f-d204d2464822-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.905721 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/671fcd5f-c44a-46e7-840f-d204d2464822-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.905798 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.906874 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86721216-38a9-4b44-8e34-d01a33c39e82-srv-cert\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.906955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-stats-auth\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907101 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1e5e11c7-6a7f-466b-8d59-674bb931db4c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907132 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tpw2\" (UniqueName: \"kubernetes.io/projected/86721216-38a9-4b44-8e34-d01a33c39e82-kube-api-access-9tpw2\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907189 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed24c96e-c389-443d-bdcf-b6fd727d472e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907225 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed24c96e-c389-443d-bdcf-b6fd727d472e-proxy-tls\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907293 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907372 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-config\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907405 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1136cbad-9e47-49e0-a890-83d86d325537-trusted-ca\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908250 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908330 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-default-certificate\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908368 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671fcd5f-c44a-46e7-840f-d204d2464822-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908393 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908451 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-metrics-certs\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908488 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-client-ca\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908514 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-etcd-serving-ca\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908540 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908593 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908626 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03d47200-aed2-431d-89fd-c27cdd91564f-audit-dir\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908676 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908703 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmqdm\" (UniqueName: \"kubernetes.io/projected/1136cbad-9e47-49e0-a890-83d86d325537-kube-api-access-gmqdm\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908738 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dqmf\" (UniqueName: \"kubernetes.io/projected/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-kube-api-access-7dqmf\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908799 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908835 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dch5s\" (UniqueName: \"kubernetes.io/projected/8224e4c0-380b-489a-98d8-ee1b15c1637a-kube-api-access-dch5s\") pod \"multus-admission-controller-857f4d67dd-qjf8d\" (UID: \"8224e4c0-380b-489a-98d8-ee1b15c1637a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908860 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86721216-38a9-4b44-8e34-d01a33c39e82-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908894 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 
crc kubenswrapper[4782]: I0202 10:41:10.908917 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-service-ca-bundle\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.910015 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.910189 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.910789 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.912476 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-config\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.914586 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.916137 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.916348 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03d47200-aed2-431d-89fd-c27cdd91564f-audit-dir\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.916871 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.917303 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.917379 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.918041 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lbq6z"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.918256 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.918991 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-etcd-serving-ca\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.919554 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.919579 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-client-ca\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.925051 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1136cbad-9e47-49e0-a890-83d86d325537-trusted-ca\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.926086 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.937398 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.938029 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.938373 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.939049 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.940272 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 
02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.941680 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qsnhv"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.942622 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.957400 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.963843 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.964484 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.980984 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.981760 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.982723 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.984027 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.984135 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.985392 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86721216-38a9-4b44-8e34-d01a33c39e82-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.986261 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-default-certificate\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.988891 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-config\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.997231 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.001600 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.002898 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.003039 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-config\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.003153 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86721216-38a9-4b44-8e34-d01a33c39e82-srv-cert\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.003299 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"] Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.004924 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"] Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.009052 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.010531 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.010265 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvlbc\" (UniqueName: \"kubernetes.io/projected/1e893e98-9670-49d0-8312-d78c86a14ba4-kube-api-access-qvlbc\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.010688 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ee676ac-a60f-4855-949f-d3210f9314f5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.010760 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-cabundle\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.010797 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhxr7\" (UniqueName: \"kubernetes.io/projected/4e9af173-4335-4ebd-9b11-dfb4180e968b-kube-api-access-zhxr7\") pod \"dns-operator-744455d44c-bhjwh\" (UID: \"4e9af173-4335-4ebd-9b11-dfb4180e968b\") " pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.010854 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011052 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4"] Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011237 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgqdc\" (UniqueName: \"kubernetes.io/projected/b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d-kube-api-access-hgqdc\") pod \"migrator-59844c95c7-bpzbh\" (UID: \"b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" 
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011307 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c29fd\" (UniqueName: \"kubernetes.io/projected/ed24c96e-c389-443d-bdcf-b6fd727d472e-kube-api-access-c29fd\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011338 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e893e98-9670-49d0-8312-d78c86a14ba4-config\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011388 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv849\" (UniqueName: \"kubernetes.io/projected/83c24a27-fdbe-468f-b4cf-780c87b598ae-kube-api-access-cv849\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011420 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxp9b\" (UniqueName: \"kubernetes.io/projected/9ee676ac-a60f-4855-949f-d3210f9314f5-kube-api-access-lxp9b\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011502 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzgdp\" (UniqueName: \"kubernetes.io/projected/9832aa65-d498-4a21-b53a-ebc591328a00-kube-api-access-kzgdp\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011943 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed24c96e-c389-443d-bdcf-b6fd727d472e-proxy-tls\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.012008 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed24c96e-c389-443d-bdcf-b6fd727d472e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.012048 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e893e98-9670-49d0-8312-d78c86a14ba4-serving-cert\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.012116 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9ee676ac-a60f-4855-949f-d3210f9314f5-images\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.012154 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.012591 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdff8\" (UniqueName: \"kubernetes.io/projected/2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1-kube-api-access-bdff8\") pod \"control-plane-machine-set-operator-78cbb6b69f-wqm6f\" (UID: \"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.012704 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wqm6f\" (UID: \"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.013016 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ee676ac-a60f-4855-949f-d3210f9314f5-proxy-tls\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.013087 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.013116 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9832aa65-d498-4a21-b53a-ebc591328a00-secret-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.013151 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2ml7\" (UniqueName: \"kubernetes.io/projected/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-kube-api-access-l2ml7\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.014245 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed24c96e-c389-443d-bdcf-b6fd727d472e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.014487 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.017721 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-metrics-certs\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.018850 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-stats-auth\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.026260 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.027180 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.027297 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e9af173-4335-4ebd-9b11-dfb4180e968b-metrics-tls\") pod \"dns-operator-744455d44c-bhjwh\" (UID: \"4e9af173-4335-4ebd-9b11-dfb4180e968b\") " pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.027393 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-key\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.030723 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.031237 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.032767 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b98c7"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.040211 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.041233 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-service-ca-bundle\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.043714 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jvpsj"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.048070 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5br4b"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.048521 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jvpsj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.053089 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.054301 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rqqwp"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.055792 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rqqwp"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.057404 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.058926 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qjf8d"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.059833 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l2hps"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.062239 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.063859 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.068296 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.071810 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r7j2r"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.071880 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.073464 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.073611 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.074947 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-84dzp"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.076804 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.079295 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.081667 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.082125 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sf9m8"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.083286 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4b45h"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.084796 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8224e4c0-380b-489a-98d8-ee1b15c1637a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qjf8d\" (UID: \"8224e4c0-380b-489a-98d8-ee1b15c1637a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.085003 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sc7kt"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.085983 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5n294"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.087355 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z8gmg"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.088584 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.089813 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rx8sj"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.090987 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.091407 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsb8s"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.091576 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rx8sj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.093075 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.094255 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.095556 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qsnhv"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.096680 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.098457 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.099012 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r7j2r"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.100207 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.101028 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rqqwp"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.102161 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bhjwh"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.103419 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rx8sj"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.104922 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lbq6z"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.106805 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.108175 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jxz27"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.110993 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129344 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzgdp\" (UniqueName: \"kubernetes.io/projected/9832aa65-d498-4a21-b53a-ebc591328a00-kube-api-access-kzgdp\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129411 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e893e98-9670-49d0-8312-d78c86a14ba4-serving-cert\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129504 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9ee676ac-a60f-4855-949f-d3210f9314f5-images\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdff8\" (UniqueName: \"kubernetes.io/projected/2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1-kube-api-access-bdff8\") pod \"control-plane-machine-set-operator-78cbb6b69f-wqm6f\" (UID: \"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129592 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wqm6f\" (UID: \"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129624 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ee676ac-a60f-4855-949f-d3210f9314f5-proxy-tls\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129680 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129709 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9832aa65-d498-4a21-b53a-ebc591328a00-secret-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129741 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2ml7\" (UniqueName: \"kubernetes.io/projected/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-kube-api-access-l2ml7\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129806 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129870 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e9af173-4335-4ebd-9b11-dfb4180e968b-metrics-tls\") pod \"dns-operator-744455d44c-bhjwh\" (UID: \"4e9af173-4335-4ebd-9b11-dfb4180e968b\") " pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129916 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-key\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130013 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvlbc\" (UniqueName: \"kubernetes.io/projected/1e893e98-9670-49d0-8312-d78c86a14ba4-kube-api-access-qvlbc\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130086 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ee676ac-a60f-4855-949f-d3210f9314f5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130120 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-cabundle\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130150 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhxr7\" (UniqueName: \"kubernetes.io/projected/4e9af173-4335-4ebd-9b11-dfb4180e968b-kube-api-access-zhxr7\") pod \"dns-operator-744455d44c-bhjwh\" (UID: \"4e9af173-4335-4ebd-9b11-dfb4180e968b\") " pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130192 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130270 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgqdc\" (UniqueName: \"kubernetes.io/projected/b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d-kube-api-access-hgqdc\") pod \"migrator-59844c95c7-bpzbh\" (UID: \"b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130320 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e893e98-9670-49d0-8312-d78c86a14ba4-config\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130356 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv849\" (UniqueName: \"kubernetes.io/projected/83c24a27-fdbe-468f-b4cf-780c87b598ae-kube-api-access-cv849\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130382 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxp9b\" (UniqueName: \"kubernetes.io/projected/9ee676ac-a60f-4855-949f-d3210f9314f5-kube-api-access-lxp9b\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.131074 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ee676ac-a60f-4855-949f-d3210f9314f5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.132420 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.142615 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e5e11c7-6a7f-466b-8d59-674bb931db4c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.151616 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.158926 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9832aa65-d498-4a21-b53a-ebc591328a00-secret-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.159460 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e5e11c7-6a7f-466b-8d59-674bb931db4c-config\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.170612 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.191167 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.197377 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671fcd5f-c44a-46e7-840f-d204d2464822-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.210255 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.230954 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.244582 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671fcd5f-c44a-46e7-840f-d204d2464822-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.250732 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.272950 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.291027 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.310772 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.330963 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.343900 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wqm6f\" (UID: \"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.351330 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.406270 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr8vc\" (UniqueName: \"kubernetes.io/projected/ccfe41db-4509-43cc-a95c-9ac09e6c9390-kube-api-access-lr8vc\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.426087 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7stzk\" (UniqueName: \"kubernetes.io/projected/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-kube-api-access-7stzk\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.445579 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5r4w\" (UniqueName: \"kubernetes.io/projected/5466b6b9-d1d5-471e-90e7-75f07078f8dc-kube-api-access-k5r4w\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.451798 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.470591 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.480869 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9ee676ac-a60f-4855-949f-d3210f9314f5-images\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.490330 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.511187 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.526421 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.535715 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.542332 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.551039 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.570912 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.584990 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.591524 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.603391 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ee676ac-a60f-4855-949f-d3210f9314f5-proxy-tls\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.627325 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9tz5\" (UniqueName: \"kubernetes.io/projected/59a1b37a-9035-459b-a485-280325d33264-kube-api-access-p9tz5\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.632524 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.638473 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.652841 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mr74\" (UniqueName: \"kubernetes.io/projected/e74c7e17-c70b-4637-ad47-58e1e192c52e-kube-api-access-5mr74\") pod \"downloads-7954f5f757-4b45h\" (UID: \"e74c7e17-c70b-4637-ad47-58e1e192c52e\") " pod="openshift-console/downloads-7954f5f757-4b45h"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.665337 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.708544 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84lv6\" (UniqueName: \"kubernetes.io/projected/3e5362ac-062f-4bf2-a0dc-e96b2750ab52-kube-api-access-84lv6\") pod \"cluster-samples-operator-665b6dd947-6pzt4\" (UID: \"3e5362ac-062f-4bf2-a0dc-e96b2750ab52\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.712928 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bklv5\" (UniqueName: \"kubernetes.io/projected/76afda26-696c-4996-bc58-1c928e4fa92a-kube-api-access-bklv5\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.729388 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xvws\" (UniqueName: \"kubernetes.io/projected/063dd8d0-356e-4c11-96fd-6ecee1f28da8-kube-api-access-6xvws\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.765081 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.776918 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdbjg\" (UniqueName: \"kubernetes.io/projected/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-kube-api-access-pdbjg\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.778562 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.789574 4782 request.go:700] Waited for 1.007236279s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.797411 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.800985 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.813107 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed24c96e-c389-443d-bdcf-b6fd727d472e-proxy-tls\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.827008 4782 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.836446 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n6ww\" (UniqueName: \"kubernetes.io/projected/082079e0-8d5a-4d2e-959e-0366e4787bd5-kube-api-access-8n6ww\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.854152 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77znk\" (UniqueName: \"kubernetes.io/projected/acfa5788-ab19-4e50-bc93-31b7a5069b32-kube-api-access-77znk\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.863878 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.865447 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl8sc\" (UniqueName: \"kubernetes.io/projected/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-kube-api-access-fl8sc\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.896531 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"] Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.896928 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2hdn\" (UniqueName: \"kubernetes.io/projected/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-kube-api-access-w2hdn\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.921327 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.922464 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh2jt\" (UniqueName: \"kubernetes.io/projected/03d47200-aed2-431d-89fd-c27cdd91564f-kube-api-access-vh2jt\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.922468 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.924398 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"] Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.931330 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtfk5\" (UniqueName: \"kubernetes.io/projected/1f4f42b8-506a-4922-b7c4-7f77afbb238c-kube-api-access-xtfk5\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.947993 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.952670 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1136cbad-9e47-49e0-a890-83d86d325537-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.952818 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.956785 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.969907 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.972370 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.981933 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.990956 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.991730 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.005481 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.021175 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.033255 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.036288 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.065434 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.072851 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.107422 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b98c7"] Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.109433 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.117201 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e9af173-4335-4ebd-9b11-dfb4180e968b-metrics-tls\") pod \"dns-operator-744455d44c-bhjwh\" (UID: \"4e9af173-4335-4ebd-9b11-dfb4180e968b\") " pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.129872 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130098 4782 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130178 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e893e98-9670-49d0-8312-d78c86a14ba4-serving-cert podName:1e893e98-9670-49d0-8312-d78c86a14ba4 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:12.63014722 +0000 UTC m=+152.514339936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1e893e98-9670-49d0-8312-d78c86a14ba4-serving-cert") pod "service-ca-operator-777779d784-vtkbj" (UID: "1e893e98-9670-49d0-8312-d78c86a14ba4") : failed to sync secret cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130302 4782 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130391 4782 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130431 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-key podName:6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a nodeName:}" failed. No retries permitted until 2026-02-02 10:41:12.630386327 +0000 UTC m=+152.514579043 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-key") pod "service-ca-9c57cc56f-lbq6z" (UID: "6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a") : failed to sync secret cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130454 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-cabundle podName:6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a nodeName:}" failed. No retries permitted until 2026-02-02 10:41:12.630445179 +0000 UTC m=+152.514637895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-cabundle") pod "service-ca-9c57cc56f-lbq6z" (UID: "6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a") : failed to sync configmap cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130518 4782 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130555 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e893e98-9670-49d0-8312-d78c86a14ba4-config podName:1e893e98-9670-49d0-8312-d78c86a14ba4 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:12.630549502 +0000 UTC m=+152.514742218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1e893e98-9670-49d0-8312-d78c86a14ba4-config") pod "service-ca-operator-777779d784-vtkbj" (UID: "1e893e98-9670-49d0-8312-d78c86a14ba4") : failed to sync configmap cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130606 4782 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130635 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume podName:9832aa65-d498-4a21-b53a-ebc591328a00 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:12.630623804 +0000 UTC m=+152.514816520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume") pod "collect-profiles-29500470-wxc6r" (UID: "9832aa65-d498-4a21-b53a-ebc591328a00") : failed to sync configmap cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.131198 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.153807 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.159074 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.171325 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.205067 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.227722 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.265753 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/671fcd5f-c44a-46e7-840f-d204d2464822-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.267114 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5br4b"] Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.291889 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e5e11c7-6a7f-466b-8d59-674bb931db4c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.320385 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tpw2\" (UniqueName: \"kubernetes.io/projected/86721216-38a9-4b44-8e34-d01a33c39e82-kube-api-access-9tpw2\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.332071 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.338124 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmqdm\" (UniqueName: \"kubernetes.io/projected/1136cbad-9e47-49e0-a890-83d86d325537-kube-api-access-gmqdm\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.339991 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dqmf\" (UniqueName: \"kubernetes.io/projected/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-kube-api-access-7dqmf\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.343127 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:12 crc kubenswrapper[4782]: W0202 10:41:12.350633 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod063dd8d0_356e_4c11_96fd_6ecee1f28da8.slice/crio-c8c178d9c4a0e7ac543160c2976a898eefe65bf9dec8b46ceade5a6ae8e7975c WatchSource:0}: Error finding container c8c178d9c4a0e7ac543160c2976a898eefe65bf9dec8b46ceade5a6ae8e7975c: Status 404 returned error can't find the container with id c8c178d9c4a0e7ac543160c2976a898eefe65bf9dec8b46ceade5a6ae8e7975c Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.354134 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.371042 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.371756 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.379391 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dch5s\" (UniqueName: \"kubernetes.io/projected/8224e4c0-380b-489a-98d8-ee1b15c1637a-kube-api-access-dch5s\") pod \"multus-admission-controller-857f4d67dd-qjf8d\" (UID: \"8224e4c0-380b-489a-98d8-ee1b15c1637a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.379951 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.393756 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.411268 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 10:41:12 crc kubenswrapper[4782]: W0202 10:41:12.414134 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc962b97_f5d3_4673_9a39_8fbf6bc2424f.slice/crio-2e820ee4fe2f628794643f76da4ce0ba7698a67d8be228c293acae563df8cf14 WatchSource:0}: Error finding container 2e820ee4fe2f628794643f76da4ce0ba7698a67d8be228c293acae563df8cf14: Status 404 returned error can't find the container with id 2e820ee4fe2f628794643f76da4ce0ba7698a67d8be228c293acae563df8cf14 Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.439069 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.451727 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.473315 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.491005 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.511115 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.534273 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.554589 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.572081 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.594041 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.618367 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.626181 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.631712 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.652179 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.652531 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" event={"ID":"ccfe41db-4509-43cc-a95c-9ac09e6c9390","Type":"ContainerStarted","Data":"23597bd2ae135850271080eb84a45c5c170651bf1de07614c9fcc3d265e2d209"} Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.654424 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4b45h"] Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.661773 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" event={"ID":"5466b6b9-d1d5-471e-90e7-75f07078f8dc","Type":"ContainerStarted","Data":"7ef745798ffd0832e9cf5479f64c3f88c407496b0b9deabd7245fee7b0e25744"} Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.662114 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.664957 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-key\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.665021 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-cabundle\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.665047 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.665088 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e893e98-9670-49d0-8312-d78c86a14ba4-config\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.665130 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e893e98-9670-49d0-8312-d78c86a14ba4-serving-cert\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.669894 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.673237 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-cabundle\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.681378 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.692224 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.697949 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" event={"ID":"872ffab3-f760-45e2-a5c8-aa1055f9ab2d","Type":"ContainerStarted","Data":"6271911877c5d27fdf7f39db98cb85a87486aa11c3c3dda949bcb89bb1ceccf1"} Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.702384 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-key\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.705320 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" event={"ID":"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce","Type":"ContainerStarted","Data":"734dc99486d2fe1cd2e38327b0aede28b14f3088279e4d36af48a315424ebef0"} Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.713249 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.716132 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-29qjf" event={"ID":"fc962b97-f5d3-4673-9a39-8fbf6bc2424f","Type":"ContainerStarted","Data":"2e820ee4fe2f628794643f76da4ce0ba7698a67d8be228c293acae563df8cf14"} Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.726892 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" event={"ID":"063dd8d0-356e-4c11-96fd-6ecee1f28da8","Type":"ContainerStarted","Data":"c8c178d9c4a0e7ac543160c2976a898eefe65bf9dec8b46ceade5a6ae8e7975c"} Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.732077 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.758803 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 
10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.771751 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.791452 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.811768 4782 request.go:700] Waited for 1.824582114s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&limit=500&resourceVersion=0 Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.816242 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.831303 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.843192 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e893e98-9670-49d0-8312-d78c86a14ba4-serving-cert\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.853796 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.858963 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"] Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.862006 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e893e98-9670-49d0-8312-d78c86a14ba4-config\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.895727 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: W0202 10:41:12.909218 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacfa5788_ab19_4e50_bc93_31b7a5069b32.slice/crio-0461ba5d89b8053e3c7c65d9d2a7d102c503b6d372a26d6649ab118ecaa97b9b WatchSource:0}: Error finding container 0461ba5d89b8053e3c7c65d9d2a7d102c503b6d372a26d6649ab118ecaa97b9b: Status 404 returned error can't find the container with id 0461ba5d89b8053e3c7c65d9d2a7d102c503b6d372a26d6649ab118ecaa97b9b Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.923460 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4"] Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.940396 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.943987 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c29fd\" (UniqueName: \"kubernetes.io/projected/ed24c96e-c389-443d-bdcf-b6fd727d472e-kube-api-access-c29fd\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.953449 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.971933 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"] Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.972263 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.993245 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.012119 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.021517 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.032259 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.035449 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.058914 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.094558 4782 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.094961 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.113980 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.130362 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.136211 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.157398 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.167322 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sc7kt"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.169516 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"] Feb 02 10:41:13 crc 
kubenswrapper[4782]: I0202 10:41:13.180012 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z8gmg"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.180313 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.214686 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sf9m8"] Feb 02 10:41:13 crc kubenswrapper[4782]: W0202 10:41:13.218721 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f11a2b3_15a4_4358_8604_bf4e6a0d22fe.slice/crio-687fb2d6f7baa7a9fd23d52bd43550811e7856da390ff3aad42ac715126bb37f WatchSource:0}: Error finding container 687fb2d6f7baa7a9fd23d52bd43550811e7856da390ff3aad42ac715126bb37f: Status 404 returned error can't find the container with id 687fb2d6f7baa7a9fd23d52bd43550811e7856da390ff3aad42ac715126bb37f Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.219613 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzgdp\" (UniqueName: \"kubernetes.io/projected/9832aa65-d498-4a21-b53a-ebc591328a00-kube-api-access-kzgdp\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.244458 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdff8\" (UniqueName: \"kubernetes.io/projected/2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1-kube-api-access-bdff8\") pod \"control-plane-machine-set-operator-78cbb6b69f-wqm6f\" (UID: \"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.256038 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2ml7\" (UniqueName: \"kubernetes.io/projected/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-kube-api-access-l2ml7\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.278242 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvlbc\" (UniqueName: \"kubernetes.io/projected/1e893e98-9670-49d0-8312-d78c86a14ba4-kube-api-access-qvlbc\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.296292 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhxr7\" (UniqueName: \"kubernetes.io/projected/4e9af173-4335-4ebd-9b11-dfb4180e968b-kube-api-access-zhxr7\") pod \"dns-operator-744455d44c-bhjwh\" (UID: \"4e9af173-4335-4ebd-9b11-dfb4180e968b\") " pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.300767 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.306795 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.313290 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.334006 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgqdc\" (UniqueName: \"kubernetes.io/projected/b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d-kube-api-access-hgqdc\") pod \"migrator-59844c95c7-bpzbh\" (UID: \"b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.337726 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv849\" (UniqueName: \"kubernetes.io/projected/83c24a27-fdbe-468f-b4cf-780c87b598ae-kube-api-access-cv849\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.349115 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxp9b\" (UniqueName: \"kubernetes.io/projected/9ee676ac-a60f-4855-949f-d3210f9314f5-kube-api-access-lxp9b\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.361126 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.385376 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.385531 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l2hps"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.388706 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.405329 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408082 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mgs\" (UniqueName: \"kubernetes.io/projected/04bfeb66-d53c-4263-a149-e7e1d705f9d1-kube-api-access-f2mgs\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408120 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4471bb99-24c2-45b0-bb05-3f3d59191e12-serving-cert\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408180 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e457712f-8cc5-4167-b074-cd8713eb9989-profile-collector-cert\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408222 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-certificates\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408251 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408298 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408317 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40828191-5926-42ba-b84d-5737181b97e5-serving-cert\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408389 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-config\") pod \"etcd-operator-b45778765-84dzp\" (UID: 
\"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408481 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4877e80d-a6fe-4503-a64c-398815efa1e0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408502 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5khp\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-kube-api-access-f5khp\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408518 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp5w5\" (UniqueName: \"kubernetes.io/projected/e457712f-8cc5-4167-b074-cd8713eb9989-kube-api-access-qp5w5\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408585 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4877e80d-a6fe-4503-a64c-398815efa1e0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-bound-sa-token\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408687 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4471bb99-24c2-45b0-bb05-3f3d59191e12-trusted-ca\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408714 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408771 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-tls\") pod \"image-registry-697d97f7c8-jxz27\" 
(UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408806 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-etcd-service-ca\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408838 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxmhg\" (UniqueName: \"kubernetes.io/projected/1180fe74-10a3-4aa0-b205-7f47597ef9b3-kube-api-access-nxmhg\") pod \"package-server-manager-789f6589d5-f8xts\" (UID: \"1180fe74-10a3-4aa0-b205-7f47597ef9b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408868 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1180fe74-10a3-4aa0-b205-7f47597ef9b3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f8xts\" (UID: \"1180fe74-10a3-4aa0-b205-7f47597ef9b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408930 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04bfeb66-d53c-4263-a149-e7e1d705f9d1-apiservice-cert\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409042 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-etcd-ca\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409061 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4zpv\" (UniqueName: \"kubernetes.io/projected/40828191-5926-42ba-b84d-5737181b97e5-kube-api-access-w4zpv\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409104 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40828191-5926-42ba-b84d-5737181b97e5-etcd-client\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409133 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdtvs\" (UniqueName: \"kubernetes.io/projected/4471bb99-24c2-45b0-bb05-3f3d59191e12-kube-api-access-hdtvs\") pod \"console-operator-58897d9998-qsnhv\" (UID: 
\"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409162 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk59v\" (UniqueName: \"kubernetes.io/projected/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-kube-api-access-kk59v\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409211 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4471bb99-24c2-45b0-bb05-3f3d59191e12-config\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409292 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04bfeb66-d53c-4263-a149-e7e1d705f9d1-webhook-cert\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409308 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-trusted-ca\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409359 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e457712f-8cc5-4167-b074-cd8713eb9989-srv-cert\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409377 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/04bfeb66-d53c-4263-a149-e7e1d705f9d1-tmpfs\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: E0202 10:41:13.414970 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:13.914319223 +0000 UTC m=+153.798511939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.442235 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.490304 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511250 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511587 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-registration-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511683 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511726 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40828191-5926-42ba-b84d-5737181b97e5-serving-cert\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511786 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wtfq\" (UniqueName: \"kubernetes.io/projected/db132fa2-cd84-4b44-b523-48b1af9f6f73-kube-api-access-6wtfq\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511815 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-config\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511849 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/4877e80d-a6fe-4503-a64c-398815efa1e0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511879 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5khp\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-kube-api-access-f5khp\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511905 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp5w5\" (UniqueName: \"kubernetes.io/projected/e457712f-8cc5-4167-b074-cd8713eb9989-kube-api-access-qp5w5\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511947 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4877e80d-a6fe-4503-a64c-398815efa1e0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511975 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-bound-sa-token\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511998 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d11d8c73-fe90-48c3-be77-b066aa57cacc-certs\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512044 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-csi-data-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512083 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4471bb99-24c2-45b0-bb05-3f3d59191e12-trusted-ca\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512107 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512175 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-tls\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512205 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-etcd-service-ca\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512237 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kfxf\" (UniqueName: \"kubernetes.io/projected/e3487bc1-e5e6-4f19-9d79-86176f5b9689-kube-api-access-4kfxf\") pod \"ingress-canary-rqqwp\" (UID: \"e3487bc1-e5e6-4f19-9d79-86176f5b9689\") " pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9fd7961-c147-4fad-b4b7-75f5567976f2-config-volume\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512304 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxmhg\" (UniqueName: \"kubernetes.io/projected/1180fe74-10a3-4aa0-b205-7f47597ef9b3-kube-api-access-nxmhg\") pod \"package-server-manager-789f6589d5-f8xts\" (UID: \"1180fe74-10a3-4aa0-b205-7f47597ef9b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512331 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v54pr\" (UniqueName: \"kubernetes.io/projected/d11d8c73-fe90-48c3-be77-b066aa57cacc-kube-api-access-v54pr\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512375 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1180fe74-10a3-4aa0-b205-7f47597ef9b3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f8xts\" (UID: \"1180fe74-10a3-4aa0-b205-7f47597ef9b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512408 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04bfeb66-d53c-4263-a149-e7e1d705f9d1-apiservice-cert\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc 
kubenswrapper[4782]: I0202 10:41:13.512488 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3487bc1-e5e6-4f19-9d79-86176f5b9689-cert\") pod \"ingress-canary-rqqwp\" (UID: \"e3487bc1-e5e6-4f19-9d79-86176f5b9689\") " pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512520 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-etcd-ca\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512548 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4zpv\" (UniqueName: \"kubernetes.io/projected/40828191-5926-42ba-b84d-5737181b97e5-kube-api-access-w4zpv\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512576 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d11d8c73-fe90-48c3-be77-b066aa57cacc-node-bootstrap-token\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512602 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40828191-5926-42ba-b84d-5737181b97e5-etcd-client\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512628 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdtvs\" (UniqueName: \"kubernetes.io/projected/4471bb99-24c2-45b0-bb05-3f3d59191e12-kube-api-access-hdtvs\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512670 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9fd7961-c147-4fad-b4b7-75f5567976f2-metrics-tls\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512697 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk59v\" (UniqueName: \"kubernetes.io/projected/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-kube-api-access-kk59v\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512737 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4471bb99-24c2-45b0-bb05-3f3d59191e12-config\") 
pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512757 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-plugins-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512849 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04bfeb66-d53c-4263-a149-e7e1d705f9d1-webhook-cert\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512880 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-trusted-ca\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512997 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-mountpoint-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.513043 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e457712f-8cc5-4167-b074-cd8713eb9989-srv-cert\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.513068 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-socket-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.513094 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/04bfeb66-d53c-4263-a149-e7e1d705f9d1-tmpfs\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.513157 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mgs\" (UniqueName: \"kubernetes.io/projected/04bfeb66-d53c-4263-a149-e7e1d705f9d1-kube-api-access-f2mgs\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.513184 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8gn4\" (UniqueName: \"kubernetes.io/projected/d9fd7961-c147-4fad-b4b7-75f5567976f2-kube-api-access-v8gn4\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.513211 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4471bb99-24c2-45b0-bb05-3f3d59191e12-serving-cert\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.515316 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4877e80d-a6fe-4503-a64c-398815efa1e0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.518800 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/04bfeb66-d53c-4263-a149-e7e1d705f9d1-tmpfs\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.519405 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-etcd-ca\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.520478 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-trusted-ca\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.520566 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e457712f-8cc5-4167-b074-cd8713eb9989-profile-collector-cert\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.520602 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-certificates\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.521165 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4471bb99-24c2-45b0-bb05-3f3d59191e12-trusted-ca\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " 
pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.521338 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-etcd-service-ca\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.524918 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-config\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.535410 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40828191-5926-42ba-b84d-5737181b97e5-serving-cert\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.540745 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e457712f-8cc5-4167-b074-cd8713eb9989-srv-cert\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.541512 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4877e80d-a6fe-4503-a64c-398815efa1e0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.541712 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4471bb99-24c2-45b0-bb05-3f3d59191e12-config\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.541774 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-certificates\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.542122 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40828191-5926-42ba-b84d-5737181b97e5-etcd-client\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.542724 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-tls\") pod \"image-registry-697d97f7c8-jxz27\" (UID: 
\"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.547279 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4471bb99-24c2-45b0-bb05-3f3d59191e12-serving-cert\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.550771 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.551001 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04bfeb66-d53c-4263-a149-e7e1d705f9d1-webhook-cert\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: E0202 10:41:13.551450 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.051371229 +0000 UTC m=+153.935563945 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.551675 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04bfeb66-d53c-4263-a149-e7e1d705f9d1-apiservice-cert\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.557944 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e457712f-8cc5-4167-b074-cd8713eb9989-profile-collector-cert\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.567351 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc 
kubenswrapper[4782]: I0202 10:41:13.569337 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5n294"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.569910 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1180fe74-10a3-4aa0-b205-7f47597ef9b3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f8xts\" (UID: \"1180fe74-10a3-4aa0-b205-7f47597ef9b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.592536 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4zpv\" (UniqueName: \"kubernetes.io/projected/40828191-5926-42ba-b84d-5737181b97e5-kube-api-access-w4zpv\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.592616 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.593081 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-bound-sa-token\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.601444 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk59v\" (UniqueName: \"kubernetes.io/projected/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-kube-api-access-kk59v\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.603004 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qjf8d"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.613208 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.621825 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-registration-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.621895 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.621930 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wtfq\" (UniqueName: \"kubernetes.io/projected/db132fa2-cd84-4b44-b523-48b1af9f6f73-kube-api-access-6wtfq\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.621973 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d11d8c73-fe90-48c3-be77-b066aa57cacc-certs\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.621998 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-csi-data-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622047 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kfxf\" (UniqueName: \"kubernetes.io/projected/e3487bc1-e5e6-4f19-9d79-86176f5b9689-kube-api-access-4kfxf\") pod \"ingress-canary-rqqwp\" (UID: \"e3487bc1-e5e6-4f19-9d79-86176f5b9689\") " pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622069 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9fd7961-c147-4fad-b4b7-75f5567976f2-config-volume\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622098 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v54pr\" (UniqueName: \"kubernetes.io/projected/d11d8c73-fe90-48c3-be77-b066aa57cacc-kube-api-access-v54pr\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622121 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3487bc1-e5e6-4f19-9d79-86176f5b9689-cert\") 
pod \"ingress-canary-rqqwp\" (UID: \"e3487bc1-e5e6-4f19-9d79-86176f5b9689\") " pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622141 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d11d8c73-fe90-48c3-be77-b066aa57cacc-node-bootstrap-token\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622162 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9fd7961-c147-4fad-b4b7-75f5567976f2-metrics-tls\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622178 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-plugins-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622200 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-mountpoint-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622221 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-socket-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622243 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8gn4\" (UniqueName: \"kubernetes.io/projected/d9fd7961-c147-4fad-b4b7-75f5567976f2-kube-api-access-v8gn4\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622746 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-registration-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.623389 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-csi-data-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: E0202 10:41:13.623425 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:41:14.123406259 +0000 UTC m=+154.007598975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.623862 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-mountpoint-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.623867 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-plugins-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.624062 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-socket-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.624214 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mgs\" (UniqueName: \"kubernetes.io/projected/04bfeb66-d53c-4263-a149-e7e1d705f9d1-kube-api-access-f2mgs\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.624919 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9fd7961-c147-4fad-b4b7-75f5567976f2-config-volume\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.628142 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.630670 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d11d8c73-fe90-48c3-be77-b066aa57cacc-node-bootstrap-token\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.631086 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9fd7961-c147-4fad-b4b7-75f5567976f2-metrics-tls\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.633282 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3487bc1-e5e6-4f19-9d79-86176f5b9689-cert\") pod \"ingress-canary-rqqwp\" (UID: \"e3487bc1-e5e6-4f19-9d79-86176f5b9689\") " pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.633811 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d11d8c73-fe90-48c3-be77-b066aa57cacc-certs\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.640721 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5khp\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-kube-api-access-f5khp\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.651750 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp5w5\" (UniqueName: \"kubernetes.io/projected/e457712f-8cc5-4167-b074-cd8713eb9989-kube-api-access-qp5w5\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.673432 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.690983 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxmhg\" (UniqueName: \"kubernetes.io/projected/1180fe74-10a3-4aa0-b205-7f47597ef9b3-kube-api-access-nxmhg\") pod \"package-server-manager-789f6589d5-f8xts\" (UID: \"1180fe74-10a3-4aa0-b205-7f47597ef9b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.698892 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.709901 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdtvs\" (UniqueName: \"kubernetes.io/projected/4471bb99-24c2-45b0-bb05-3f3d59191e12-kube-api-access-hdtvs\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.715197 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.725763 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.726308 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8gn4\" (UniqueName: \"kubernetes.io/projected/d9fd7961-c147-4fad-b4b7-75f5567976f2-kube-api-access-v8gn4\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.726530 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: E0202 10:41:13.726623 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.226582708 +0000 UTC m=+154.110775424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.740372 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" event={"ID":"8224e4c0-380b-489a-98d8-ee1b15c1637a","Type":"ContainerStarted","Data":"d5567bbd4eeb150c519d43cab06c78ecb484659f0ed4fc3c5811bd160745dafc"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.746307 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" event={"ID":"cc1f149e-4ec4-423a-b94e-bf0923a75bdf","Type":"ContainerStarted","Data":"1fb96a86c69b5b50f9cceae102b5c65fadf412c9575f5bc2d83c29442bb3c286"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.747248 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" event={"ID":"86721216-38a9-4b44-8e34-d01a33c39e82","Type":"ContainerStarted","Data":"3628b137d2d83efa460ef5539b666645db067ea4425c2e7efb22b4f0ec56fcc5"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.752141 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v54pr\" (UniqueName: \"kubernetes.io/projected/d11d8c73-fe90-48c3-be77-b066aa57cacc-kube-api-access-v54pr\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.755036 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" event={"ID":"5466b6b9-d1d5-471e-90e7-75f07078f8dc","Type":"ContainerStarted","Data":"b87f2c5487c1eeb777a70064742d269f6bd2ef8e07f38b420c016c73d76393c8"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.758333 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" event={"ID":"082079e0-8d5a-4d2e-959e-0366e4787bd5","Type":"ContainerStarted","Data":"b9464c36501b143c6fdc51ced7bea02d503e0b02a5314a8a2e3ef93e0c79e5a1"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.759950 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" event={"ID":"59a1b37a-9035-459b-a485-280325d33264","Type":"ContainerStarted","Data":"43da730602ca37219a75d1347b35a8488feb8647bfa755b3e8e2deac39ad1b1b"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.759988 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" event={"ID":"59a1b37a-9035-459b-a485-280325d33264","Type":"ContainerStarted","Data":"1bed6a14af1c27e28bfaf20957b4ab6debdecb60fbd87716abbb4a3205ddb87a"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.763678 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" 
event={"ID":"063dd8d0-356e-4c11-96fd-6ecee1f28da8","Type":"ContainerStarted","Data":"817dc238f91cf819a8009c1a6eff870c565f9e77876230a572f763d46f20197e"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.763742 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" event={"ID":"063dd8d0-356e-4c11-96fd-6ecee1f28da8","Type":"ContainerStarted","Data":"daa0bdca97ae3b6c22432214d3a6fff56d4ae3e6ae6e05e0805afbd87d88c210"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.768316 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wtfq\" (UniqueName: \"kubernetes.io/projected/db132fa2-cd84-4b44-b523-48b1af9f6f73-kube-api-access-6wtfq\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.769060 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" event={"ID":"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe","Type":"ContainerStarted","Data":"687fb2d6f7baa7a9fd23d52bd43550811e7856da390ff3aad42ac715126bb37f"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.773297 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.785901 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" event={"ID":"872ffab3-f760-45e2-a5c8-aa1055f9ab2d","Type":"ContainerStarted","Data":"d72307d69f602e1ff26c8bbc8c21c16eb7edeb1b416c53a98c63744c491fce60"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.786951 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kfxf\" (UniqueName: \"kubernetes.io/projected/e3487bc1-e5e6-4f19-9d79-86176f5b9689-kube-api-access-4kfxf\") pod \"ingress-canary-rqqwp\" (UID: \"e3487bc1-e5e6-4f19-9d79-86176f5b9689\") " pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.793859 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" event={"ID":"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc","Type":"ContainerStarted","Data":"e1d3c6b879e919d9a8eeb6fc928bb73b1f6f10789e409d2d56a047e2a54eac9e"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.809512 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.817790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4b45h" event={"ID":"e74c7e17-c70b-4637-ad47-58e1e192c52e","Type":"ContainerStarted","Data":"10f247f0ec7d89e86c0b592dc814ce67e3e07cafde8483a429f5e7c5f241e65d"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.817852 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4b45h" event={"ID":"e74c7e17-c70b-4637-ad47-58e1e192c52e","Type":"ContainerStarted","Data":"866c9e4ea060d3590180ce1cec057596aa8bcbf4e01b778a768a521ce79fceb7"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.818562 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.820031 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" event={"ID":"1f4f42b8-506a-4922-b7c4-7f77afbb238c","Type":"ContainerStarted","Data":"6cb750d8f6e27611724e0ea1f017a2846617e985d57d9fe4f1da057656e28495"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.820929 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.820987 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.830580 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.832287 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" event={"ID":"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce","Type":"ContainerStarted","Data":"86f8a20613675160f704acae6ec62fb56aeac8a22285b87dc22bd713e9f4fd9c"} Feb 02 10:41:13 crc kubenswrapper[4782]: E0202 10:41:13.832716 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.332697021 +0000 UTC m=+154.216889737 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.835041 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" event={"ID":"ccfe41db-4509-43cc-a95c-9ac09e6c9390","Type":"ContainerStarted","Data":"abd0c83cc403dcb1d0f98f3306260bba6570eb49eb377b7c92717f14fd267e4f"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.837441 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" event={"ID":"ed24c96e-c389-443d-bdcf-b6fd727d472e","Type":"ContainerStarted","Data":"e6bcb3c3a4317261a64251a90479a74e7b78a949462b94a6651e31463ec332dc"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.860196 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" event={"ID":"3e5362ac-062f-4bf2-a0dc-e96b2750ab52","Type":"ContainerStarted","Data":"ee9de80d00973c840fc5b964ea74298a58d60b1e718bafe4377156baab611d49"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.864848 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" event={"ID":"03d47200-aed2-431d-89fd-c27cdd91564f","Type":"ContainerStarted","Data":"3ff1f99d47a76aef7148a44cb594fe9fccb90137af935d28016f31e0538f0f1c"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.886984 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" event={"ID":"1e5e11c7-6a7f-466b-8d59-674bb931db4c","Type":"ContainerStarted","Data":"28e6524c0e66d885bbe200b6dfb3f02338992324346cef2b6fa2bd6754e0330c"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.897675 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" event={"ID":"671fcd5f-c44a-46e7-840f-d204d2464822","Type":"ContainerStarted","Data":"a1a4a81ae79b8b2d14e735f0023db45c9b575a45ef2a77f622fb384d724b48b9"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.932271 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:13 crc kubenswrapper[4782]: E0202 10:41:13.933973 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.433933574 +0000 UTC m=+154.318126330 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.953055 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.953923 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.977527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sf9m8" event={"ID":"76afda26-696c-4996-bc58-1c928e4fa92a","Type":"ContainerStarted","Data":"a512fcceae6cfaeaad197794b5b6c708f15cf79898b3102381c39333768e348a"} Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.000060 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" event={"ID":"acfa5788-ab19-4e50-bc93-31b7a5069b32","Type":"ContainerStarted","Data":"921233a570c55bc883c3564dff261aca39ab0ae30aac10af39e867fb0a53b2da"} Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.000128 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" event={"ID":"acfa5788-ab19-4e50-bc93-31b7a5069b32","Type":"ContainerStarted","Data":"0461ba5d89b8053e3c7c65d9d2a7d102c503b6d372a26d6649ab118ecaa97b9b"} Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.006310 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-29qjf" event={"ID":"fc962b97-f5d3-4673-9a39-8fbf6bc2424f","Type":"ContainerStarted","Data":"7d308460ab001244221375cbec47c4b889163b2f3295f1c1f6a02f8a9fbe57eb"} Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.016279 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" event={"ID":"1136cbad-9e47-49e0-a890-83d86d325537","Type":"ContainerStarted","Data":"049e08b1c9c5f8aae7d828e291ec4f9d2fdce967f94545a1fa74bd6467d9dd95"} Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.033873 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.034333 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.534316792 +0000 UTC m=+154.418509508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.036897 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.039881 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.039983 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.045105 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.051062 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.074962 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bhjwh"] Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.135563 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.135804 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.63575989 +0000 UTC m=+154.519952616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.135883 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.138358 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.638325654 +0000 UTC m=+154.522518540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.155767 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"] Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.207584 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" podStartSLOduration=126.207544503 podStartE2EDuration="2m6.207544503s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:14.20572041 +0000 UTC m=+154.089913126" watchObservedRunningTime="2026-02-02 10:41:14.207544503 +0000 UTC m=+154.091737219" Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.237606 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.238169 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.738141196 +0000 UTC m=+154.622333912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.238504 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"] Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.296451 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lbq6z"] Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.307376 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsb8s"] Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.313772 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"] Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.354955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.355456 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.855435212 +0000 UTC m=+154.739627928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.370523 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" podStartSLOduration=126.370497557 podStartE2EDuration="2m6.370497557s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:14.36921909 +0000 UTC m=+154.253411806" watchObservedRunningTime="2026-02-02 10:41:14.370497557 +0000 UTC m=+154.254690273" Feb 02 10:41:14 crc kubenswrapper[4782]: W0202 10:41:14.427121 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e893e98_9670_49d0_8312_d78c86a14ba4.slice/crio-809ced7a26c02c28b5525a5b5a64796c9293444511209c47a3b495bd53ca823a WatchSource:0}: Error finding container 809ced7a26c02c28b5525a5b5a64796c9293444511209c47a3b495bd53ca823a: Status 404 returned error can't find the container with id 809ced7a26c02c28b5525a5b5a64796c9293444511209c47a3b495bd53ca823a Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.456278 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.456476 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.956442848 +0000 UTC m=+154.840635574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.456587 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.457099 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.957079197 +0000 UTC m=+154.841271913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.504322 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh"] Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.558564 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.559405 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.05936102 +0000 UTC m=+154.943553736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: W0202 10:41:14.609247 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fdb9068_c8eb_4a1d_b4ab_c3f2ed70e4c1.slice/crio-7916cc388dcb22b6be0d597f62adf1c812e9093f673dd8609e03b65cbf05318a WatchSource:0}: Error finding container 7916cc388dcb22b6be0d597f62adf1c812e9093f673dd8609e03b65cbf05318a: Status 404 returned error can't find the container with id 7916cc388dcb22b6be0d597f62adf1c812e9093f673dd8609e03b65cbf05318a Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.660423 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.660915 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.160892561 +0000 UTC m=+155.045085277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: W0202 10:41:14.702101 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb925d6b9_8b5c_4407_bd7b_9ddcbc62d78d.slice/crio-3c914bce15811dddff10b6d68d66129a18313510803ead0a043bdc7959aa2d89 WatchSource:0}: Error finding container 3c914bce15811dddff10b6d68d66129a18313510803ead0a043bdc7959aa2d89: Status 404 returned error can't find the container with id 3c914bce15811dddff10b6d68d66129a18313510803ead0a043bdc7959aa2d89 Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.761194 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.761743 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 10:41:15.261720041 +0000 UTC m=+155.145912757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.862780 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.863971 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.363953693 +0000 UTC m=+155.248146409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.967518 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.968090 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.468067719 +0000 UTC m=+155.352260435 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.968700 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.969217 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.469198221 +0000 UTC m=+155.353390937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.997460 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" podStartSLOduration=127.997412856 podStartE2EDuration="2m7.997412856s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:14.95634353 +0000 UTC m=+154.840536246" watchObservedRunningTime="2026-02-02 10:41:14.997412856 +0000 UTC m=+154.881605572" Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.021108 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4b45h" podStartSLOduration=127.021065689 podStartE2EDuration="2m7.021065689s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.018838114 +0000 UTC m=+154.903030830" watchObservedRunningTime="2026-02-02 10:41:15.021065689 +0000 UTC m=+154.905258405" Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.042700 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.043064 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial 
tcp [::1]:1936: connect: connection refused" Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.062565 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" podStartSLOduration=128.062532046 podStartE2EDuration="2m8.062532046s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.054319219 +0000 UTC m=+154.938511935" watchObservedRunningTime="2026-02-02 10:41:15.062532046 +0000 UTC m=+154.946724762" Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.073097 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.080964 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" event={"ID":"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a","Type":"ContainerStarted","Data":"a8320cde34b625168e4692cde9b3454dccaeabc829174b47d64085d2f5b45873"} Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.101300 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.601243893 +0000 UTC m=+155.485436609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.128908 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" event={"ID":"b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d","Type":"ContainerStarted","Data":"3c914bce15811dddff10b6d68d66129a18313510803ead0a043bdc7959aa2d89"} Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.131062 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" event={"ID":"83c24a27-fdbe-468f-b4cf-780c87b598ae","Type":"ContainerStarted","Data":"01e32062d069a57210dfb3c4675630b56cd608a941ddd39a5505e8107646b05b"} Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.151337 4782 generic.go:334] "Generic (PLEG): container finished" podID="acfa5788-ab19-4e50-bc93-31b7a5069b32" containerID="921233a570c55bc883c3564dff261aca39ab0ae30aac10af39e867fb0a53b2da" exitCode=0 Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.151487 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" event={"ID":"acfa5788-ab19-4e50-bc93-31b7a5069b32","Type":"ContainerDied","Data":"921233a570c55bc883c3564dff261aca39ab0ae30aac10af39e867fb0a53b2da"} Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 
10:41:15.160056 4782 generic.go:334] "Generic (PLEG): container finished" podID="082079e0-8d5a-4d2e-959e-0366e4787bd5" containerID="ae210efe35521adeb523a0ad70993b1caf4264aaadaf47536448cef5635fce63" exitCode=0 Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.160162 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" event={"ID":"082079e0-8d5a-4d2e-959e-0366e4787bd5","Type":"ContainerDied","Data":"ae210efe35521adeb523a0ad70993b1caf4264aaadaf47536448cef5635fce63"} Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.162672 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" event={"ID":"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe","Type":"ContainerStarted","Data":"0c77fbef295a9ce71e979d858238a9e4d89ec3bbb50cfdba6e4a858df0d23e23"} Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.207107 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" event={"ID":"4e9af173-4335-4ebd-9b11-dfb4180e968b","Type":"ContainerStarted","Data":"0b432bd836bc2fca5309bd46ebb2c520232f7e3ae9b4aba53e4b7c48b69410ea"} Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.229234 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.729194587 +0000 UTC m=+155.613387293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.229707 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.252416 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-84dzp"] Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.331418 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.331870 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.831842521 +0000 UTC m=+155.716035227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.332085 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.336326 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.836299909 +0000 UTC m=+155.720492625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.349263 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" event={"ID":"cc1f149e-4ec4-423a-b94e-bf0923a75bdf","Type":"ContainerStarted","Data":"675acde6c4f65e65a92a61e36467a7b8e1bcf1c0911ce0ce962b3a6a63fd17e5"} Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.419538 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" event={"ID":"9832aa65-d498-4a21-b53a-ebc591328a00","Type":"ContainerStarted","Data":"366cbdbf60f3a0bf6cbbfb63bc5e1d0a80ef38264552b527daf3339d1fbe1798"} Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.432945 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.435218 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.935185564 +0000 UTC m=+155.819378280 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.436485 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr"] Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.446083 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-29qjf" podStartSLOduration=127.446023747 podStartE2EDuration="2m7.446023747s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.428076219 +0000 UTC m=+155.312268935" watchObservedRunningTime="2026-02-02 10:41:15.446023747 +0000 UTC m=+155.330216463" Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.449006 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts"] Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.553472 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.554917 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qsnhv"] Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.555404 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.055381434 +0000 UTC m=+155.939574230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.564245 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" event={"ID":"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1","Type":"ContainerStarted","Data":"7916cc388dcb22b6be0d597f62adf1c812e9093f673dd8609e03b65cbf05318a"} Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.580895 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" event={"ID":"1e893e98-9670-49d0-8312-d78c86a14ba4","Type":"ContainerStarted","Data":"809ced7a26c02c28b5525a5b5a64796c9293444511209c47a3b495bd53ca823a"} Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.581594 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.590216 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.659010 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.660849 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" podStartSLOduration=127.660826628 podStartE2EDuration="2m7.660826628s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.646781913 +0000 UTC m=+155.530974629" watchObservedRunningTime="2026-02-02 10:41:15.660826628 +0000 UTC m=+155.545019344" Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.661601 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.16158329 +0000 UTC m=+156.045776006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:15 crc kubenswrapper[4782]: W0202 10:41:15.674935 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1180fe74_10a3_4aa0_b205_7f47597ef9b3.slice/crio-609183786e2387bef677fcfe3211e6a646ccb844ae0f5b2d1b485aee70e57e15 WatchSource:0}: Error finding container 609183786e2387bef677fcfe3211e6a646ccb844ae0f5b2d1b485aee70e57e15: Status 404 returned error can't find the container with id 609183786e2387bef677fcfe3211e6a646ccb844ae0f5b2d1b485aee70e57e15 Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.695959 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" podStartSLOduration=127.695931712 podStartE2EDuration="2m7.695931712s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.694396237 +0000 UTC m=+155.578588953" watchObservedRunningTime="2026-02-02 10:41:15.695931712 +0000 UTC m=+155.580124418" Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.760251 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" podStartSLOduration=127.760224568 podStartE2EDuration="2m7.760224568s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.756618384 +0000 UTC m=+155.640811100" watchObservedRunningTime="2026-02-02 10:41:15.760224568 +0000 UTC m=+155.644417284" Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.763749 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.764163 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.26414532 +0000 UTC m=+156.148338046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.796578 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r7j2r"] Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.861485 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rx8sj"] Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.865392 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.865878 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.365856256 +0000 UTC m=+156.250048972 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.882522 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" podStartSLOduration=127.882485636 podStartE2EDuration="2m7.882485636s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.85213801 +0000 UTC m=+155.736330726" watchObservedRunningTime="2026-02-02 10:41:15.882485636 +0000 UTC m=+155.766678352" Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.887511 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"] Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.920193 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp"] Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.967839 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.968305 4782 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.468289314 +0000 UTC m=+156.352482030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.071386 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.071981 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.571957136 +0000 UTC m=+156.456149852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.188131 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:16 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:16 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:16 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.188611 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.209148 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.209759 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:41:16.709737974 +0000 UTC m=+156.593930690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.314367 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.314760 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.814716125 +0000 UTC m=+156.698908831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.319206 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.319691 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.819670788 +0000 UTC m=+156.703863504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.341315 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rqqwp"] Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.432360 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.432795 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.932770903 +0000 UTC m=+156.816963619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.432915 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.433385 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.93337554 +0000 UTC m=+156.817568256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.518102 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg"] Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.539273 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.540007 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.039968568 +0000 UTC m=+156.924161284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.624825 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" event={"ID":"40828191-5926-42ba-b84d-5737181b97e5","Type":"ContainerStarted","Data":"6714b6b7956b6309c079a11da7e25c5269bbe368016ee25c48bf02265e924da4"} Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.643722 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.644247 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.144229978 +0000 UTC m=+157.028422694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.653181 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sf9m8" event={"ID":"76afda26-696c-4996-bc58-1c928e4fa92a","Type":"ContainerStarted","Data":"8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee"} Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.694357 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-sf9m8" podStartSLOduration=128.694324564 podStartE2EDuration="2m8.694324564s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:16.693951053 +0000 UTC m=+156.578143769" watchObservedRunningTime="2026-02-02 10:41:16.694324564 +0000 UTC m=+156.578517280" Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.732461 4782 generic.go:334] "Generic (PLEG): container finished" podID="1f4f42b8-506a-4922-b7c4-7f77afbb238c" containerID="34e7f8d57e8c9375f8c9398e6d4cd370ed725dc5ed5f6555c8a512aa9090cea5" exitCode=0 Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.732824 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" event={"ID":"1f4f42b8-506a-4922-b7c4-7f77afbb238c","Type":"ContainerDied","Data":"34e7f8d57e8c9375f8c9398e6d4cd370ed725dc5ed5f6555c8a512aa9090cea5"} Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.751443 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.754743 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.254700137 +0000 UTC m=+157.138892853 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.755924 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.758000 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.257983552 +0000 UTC m=+157.142176348 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.809775 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" event={"ID":"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce","Type":"ContainerStarted","Data":"2e679f2a123f3ea3c808d877619f195af22f10478050a25694f4c7bd93e60016"} Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.855304 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jvpsj" event={"ID":"d11d8c73-fe90-48c3-be77-b066aa57cacc","Type":"ContainerStarted","Data":"2ace9e1bf16756d2ae740c0dadc00576d3529dd94d54e5892dd754bf57288489"} Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.855792 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" event={"ID":"9ee676ac-a60f-4855-949f-d3210f9314f5","Type":"ContainerStarted","Data":"aa584d7d950044cd4ffbd586e9a45da0355e4894887e2a530ac6cc37db172038"} Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.855806 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" event={"ID":"1180fe74-10a3-4aa0-b205-7f47597ef9b3","Type":"ContainerStarted","Data":"609183786e2387bef677fcfe3211e6a646ccb844ae0f5b2d1b485aee70e57e15"} Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.861088 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 
10:41:16.864366 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.364332012 +0000 UTC m=+157.248524728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.913393 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" event={"ID":"1e5e11c7-6a7f-466b-8d59-674bb931db4c","Type":"ContainerStarted","Data":"4c5f44d3fab84c230a91f9e50e8922318b2da22446b29707de66df8a793b9b0f"} Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.939567 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" event={"ID":"86721216-38a9-4b44-8e34-d01a33c39e82","Type":"ContainerStarted","Data":"c784413eb79fdf358b73df71b8841a880b3a4fbb7d789d856ce2403a59d79fc9"} Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.940268 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.942880 4782 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pvghb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.942919 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" podUID="86721216-38a9-4b44-8e34-d01a33c39e82" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.950784 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" event={"ID":"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a","Type":"ContainerStarted","Data":"3c0cd563542e52112f85015a41ab2b892d7ecd47f9d6571f9e5bdc03673d82e2"} Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.965923 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.966307 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:41:17.466267575 +0000 UTC m=+157.350460471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.977218 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rx8sj" event={"ID":"d9fd7961-c147-4fad-b4b7-75f5567976f2","Type":"ContainerStarted","Data":"d0b1d1249ae95523d0bb7cd985db2b5c7d0b50a7db66d727ea69904fee5552df"} Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.979821 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" podStartSLOduration=129.979795455 podStartE2EDuration="2m9.979795455s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:16.854027914 +0000 UTC m=+156.738220630" watchObservedRunningTime="2026-02-02 10:41:16.979795455 +0000 UTC m=+156.863988171" Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.980462 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" podStartSLOduration=128.980454934 podStartE2EDuration="2m8.980454934s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:16.977776287 +0000 UTC m=+156.861969023" watchObservedRunningTime="2026-02-02 10:41:16.980454934 +0000 UTC m=+156.864647670" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:16.999913 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" event={"ID":"83c24a27-fdbe-468f-b4cf-780c87b598ae","Type":"ContainerStarted","Data":"7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.000604 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.002287 4782 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dsb8s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.002327 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.035453 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" event={"ID":"3e5362ac-062f-4bf2-a0dc-e96b2750ab52","Type":"ContainerStarted","Data":"b04ac8253f1360623e90bc90b7f2cc920ce7d2b40dbfa6d465169a37dedfa6dd"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.037728 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" event={"ID":"03d47200-aed2-431d-89fd-c27cdd91564f","Type":"ContainerStarted","Data":"df2490b959607b5dd5fcd068ecdc3e142f390fcbf0477d98f074718eab612f07"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.045038 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.048208 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:17 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:17 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:17 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.048279 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.060429 4782 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-sc7kt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.061064 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" podUID="03d47200-aed2-431d-89fd-c27cdd91564f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.072742 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.074152 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.574131909 +0000 UTC m=+157.458324625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.090069 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" podStartSLOduration=129.090030388 podStartE2EDuration="2m9.090030388s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.070746971 +0000 UTC m=+156.954939707" watchObservedRunningTime="2026-02-02 10:41:17.090030388 +0000 UTC m=+156.974223104" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.090497 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" podStartSLOduration=129.090490021 podStartE2EDuration="2m9.090490021s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.027880073 +0000 UTC m=+156.912072799" watchObservedRunningTime="2026-02-02 10:41:17.090490021 +0000 UTC m=+156.974682737" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.136999 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" podStartSLOduration=130.136977993 podStartE2EDuration="2m10.136977993s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.135936573 +0000 UTC m=+157.020129289" watchObservedRunningTime="2026-02-02 10:41:17.136977993 +0000 UTC m=+157.021170709" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.176686 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.176981 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.676969228 +0000 UTC m=+157.561161944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.185228 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" event={"ID":"1e893e98-9670-49d0-8312-d78c86a14ba4","Type":"ContainerStarted","Data":"19c52582d9d6503c8512b7942d993ecb6474039013418ab9733501cd405c2f51"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.186582 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" podStartSLOduration=129.186560835 podStartE2EDuration="2m9.186560835s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.186482302 +0000 UTC m=+157.070675018" watchObservedRunningTime="2026-02-02 10:41:17.186560835 +0000 UTC m=+157.070753551" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.258347 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" podStartSLOduration=129.258305946 podStartE2EDuration="2m9.258305946s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.234404136 +0000 UTC m=+157.118596862" watchObservedRunningTime="2026-02-02 10:41:17.258305946 +0000 UTC m=+157.142498662" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.260665 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" event={"ID":"ed24c96e-c389-443d-bdcf-b6fd727d472e","Type":"ContainerStarted","Data":"e037096537fd49b553528b65b8f5d6cec7fd232ba7d814579263eb45d04c42d7"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.286266 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.299687 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.799622588 +0000 UTC m=+157.683815304 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.322576 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.324249 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.824233929 +0000 UTC m=+157.708426645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.340089 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" podStartSLOduration=129.340061036 podStartE2EDuration="2m9.340061036s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.339610143 +0000 UTC m=+157.223802859" watchObservedRunningTime="2026-02-02 10:41:17.340061036 +0000 UTC m=+157.224253752" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.406868 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" event={"ID":"9832aa65-d498-4a21-b53a-ebc591328a00","Type":"ContainerStarted","Data":"b85748eff3923d08bc6d620f725d7b018256a0e4610871950b9aeb66eccc2539"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.409385 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" podStartSLOduration=129.409349516 podStartE2EDuration="2m9.409349516s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.397149214 +0000 UTC m=+157.281341930" watchObservedRunningTime="2026-02-02 10:41:17.409349516 +0000 UTC m=+157.293542232" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.422404 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" 
event={"ID":"04bfeb66-d53c-4263-a149-e7e1d705f9d1","Type":"ContainerStarted","Data":"d379e1009a1a0d9e7dc27efc54baf6b5fb55c2e2555ae9793f6b41017abec525"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.424312 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.425867 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.925810311 +0000 UTC m=+157.810003027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.430353 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" event={"ID":"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc","Type":"ContainerStarted","Data":"6845853f244b2c75c99b177a0d1190d48806df172d59ab6bff1e9c7722883a01"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.436300 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.450140 4782 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l2hps container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.450207 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.489330 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" event={"ID":"4471bb99-24c2-45b0-bb05-3f3d59191e12","Type":"ContainerStarted","Data":"cedf0ef9e9da630d9d7462798a41206f29c965ecd9b398094d354ec99515d10c"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.490370 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.491790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" 
event={"ID":"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d","Type":"ContainerStarted","Data":"144bdd87fcd4e43e08dc8e8720f7a4a651248039101cecfd432b655bd220270b"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.492563 4782 patch_prober.go:28] interesting pod/console-operator-58897d9998-qsnhv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.492604 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" podUID="4471bb99-24c2-45b0-bb05-3f3d59191e12" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.513302 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" event={"ID":"db132fa2-cd84-4b44-b523-48b1af9f6f73","Type":"ContainerStarted","Data":"f645a791a3450700734f03dc17d37ec76d3e1d1ce6dd2fc86ac5a97507c430f7"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.529764 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.531514 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.031487122 +0000 UTC m=+157.915680018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.542830 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" event={"ID":"b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d","Type":"ContainerStarted","Data":"1ff197f1bc588e155c33611ca81f8fb60a2d7209994e44077453eba80258b1aa"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.624985 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" podStartSLOduration=129.624961941 podStartE2EDuration="2m9.624961941s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.623006254 +0000 UTC m=+157.507198970" watchObservedRunningTime="2026-02-02 10:41:17.624961941 +0000 UTC m=+157.509154657" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.625109 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" podStartSLOduration=130.625104875 podStartE2EDuration="2m10.625104875s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.452447731 +0000 UTC m=+157.336640447" watchObservedRunningTime="2026-02-02 10:41:17.625104875 +0000 UTC m=+157.509297591" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.629472 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" event={"ID":"1136cbad-9e47-49e0-a890-83d86d325537","Type":"ContainerStarted","Data":"88dd5902d563a82f23232676ad168c9e7e041f718f29d548107f4e61045ad4cb"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.631566 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.632769 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.132737975 +0000 UTC m=+158.016930691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.652928 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" event={"ID":"671fcd5f-c44a-46e7-840f-d204d2464822","Type":"ContainerStarted","Data":"eb8054c8ebd3591d23679ac11a66f5399c030827cda5bc451b67c3a140494277"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.654862 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.654906 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.733362 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.740208 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.240174387 +0000 UTC m=+158.124367313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.835891 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.836358 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 10:41:18.335972513 +0000 UTC m=+158.220165239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.836399 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.836815 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.336802387 +0000 UTC m=+158.220995103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.839698 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" podStartSLOduration=129.83968368 podStartE2EDuration="2m9.83968368s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.747505629 +0000 UTC m=+157.631698345" watchObservedRunningTime="2026-02-02 10:41:17.83968368 +0000 UTC m=+157.723876416" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.941561 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.942136 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.442088626 +0000 UTC m=+158.326281342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.043696 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.044441 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.544423501 +0000 UTC m=+158.428616207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.046954 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:18 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:18 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:18 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.047025 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.150337 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.150869 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.650840563 +0000 UTC m=+158.535033279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.254900 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.256936 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.756910155 +0000 UTC m=+158.641102871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.360733 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.361114 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.861075382 +0000 UTC m=+158.745268098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.472788 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.473190 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.973175308 +0000 UTC m=+158.857368024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.574970 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.575412 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.075394079 +0000 UTC m=+158.959586795 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.676680 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.676731 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" event={"ID":"1136cbad-9e47-49e0-a890-83d86d325537","Type":"ContainerStarted","Data":"5d93a4ee7ab87f66d43ca414cad70a3abab495e56d5fa4d07a3ec4fd51e19e67"}
Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.677173 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.177148707 +0000 UTC m=+159.061341433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.689244 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" event={"ID":"ed24c96e-c389-443d-bdcf-b6fd727d472e","Type":"ContainerStarted","Data":"16d1533fa9fd564637c00c7ac26601ec856b29a69837c170a48643b421b53856"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.704675 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" event={"ID":"acfa5788-ab19-4e50-bc93-31b7a5069b32","Type":"ContainerStarted","Data":"72bcac9cb7b150a69f6ceca776a72ef29a03eba9f5c1f9ad15dc601084d035bd"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.704772 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.711962 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" event={"ID":"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d","Type":"ContainerStarted","Data":"faa62695a6a822d2ecef0a8249556c0694c69b5ad78c8384c335b7a0aa887cd6"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.723256 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" event={"ID":"e457712f-8cc5-4167-b074-cd8713eb9989","Type":"ContainerStarted","Data":"93e1b5344dcec97dbfdf479458be0e7b8e37078141a051288144b56f50e25139"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.723299 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" event={"ID":"e457712f-8cc5-4167-b074-cd8713eb9989","Type":"ContainerStarted","Data":"37c6be04ae0eb86f31e008f743a9c19454846b04d7d8c6f787d7766afb8fcaf4"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.724237 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.724939 4782 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-x2mbg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.724990 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" podUID="e457712f-8cc5-4167-b074-cd8713eb9989" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.726961 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" event={"ID":"082079e0-8d5a-4d2e-959e-0366e4787bd5","Type":"ContainerStarted","Data":"fc20bf99c5542b1e870fd90d6d5b8c58c5907e341414a69e3a33457bb31007a9"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.727088 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" podStartSLOduration=130.727071228 podStartE2EDuration="2m10.727071228s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.841288116 +0000 UTC m=+157.725480852" watchObservedRunningTime="2026-02-02 10:41:18.727071228 +0000 UTC m=+158.611263944"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.727592 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" podStartSLOduration=130.727587543 podStartE2EDuration="2m10.727587543s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.725447161 +0000 UTC m=+158.609639877" watchObservedRunningTime="2026-02-02 10:41:18.727587543 +0000 UTC m=+158.611780259"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.738369 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rqqwp" event={"ID":"e3487bc1-e5e6-4f19-9d79-86176f5b9689","Type":"ContainerStarted","Data":"dde926fdab276b28b2a5e4d83a8d09ceaf9ff32a232c715b88e7b0b563a33a9c"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.738467 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rqqwp" event={"ID":"e3487bc1-e5e6-4f19-9d79-86176f5b9689","Type":"ContainerStarted","Data":"47d434ef590522ad750aa45a3290d2712e9b65f0e14b67777d7efb0981de5a59"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.745277 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" event={"ID":"40828191-5926-42ba-b84d-5737181b97e5","Type":"ContainerStarted","Data":"0274879557cb2fa846e7d39a3d6e1c0b31c73237c40e22c24552cf5b1376b0fe"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.754624 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jvpsj" event={"ID":"d11d8c73-fe90-48c3-be77-b066aa57cacc","Type":"ContainerStarted","Data":"3aaa428559bc9e8e4b518d2006412cd18d56f880137224c819a072c94bda832c"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.761043 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" podStartSLOduration=130.761023238 podStartE2EDuration="2m10.761023238s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.755689504 +0000 UTC m=+158.639882220" watchObservedRunningTime="2026-02-02 10:41:18.761023238 +0000 UTC m=+158.645215954"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.772220 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" event={"ID":"b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d","Type":"ContainerStarted","Data":"30352607cc42d152f7e595932bca688b1d98d76ed3cb4bceea083527bca6d270"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.776347 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" event={"ID":"9ee676ac-a60f-4855-949f-d3210f9314f5","Type":"ContainerStarted","Data":"a413a2212255ee67ee1968cf2c2f15fda2e4bb4fb57b18592790dbb3eb7eb9b3"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.776413 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" event={"ID":"9ee676ac-a60f-4855-949f-d3210f9314f5","Type":"ContainerStarted","Data":"197b793a725593dac94b13df26077c78c845077c65c1cf972dd5fa7577219469"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.780289 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.782252 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.282117747 +0000 UTC m=+159.166310463 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.784686 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" event={"ID":"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1","Type":"ContainerStarted","Data":"d8325544121c869765ab1664f09dc48dc5b720b7e1de88a2f461978b91312e60"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.796628 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" podStartSLOduration=130.796595485 podStartE2EDuration="2m10.796595485s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.793400593 +0000 UTC m=+158.677593319" watchObservedRunningTime="2026-02-02 10:41:18.796595485 +0000 UTC m=+158.680788201"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.815655 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" event={"ID":"4471bb99-24c2-45b0-bb05-3f3d59191e12","Type":"ContainerStarted","Data":"717ba9dced6c0626b628bed28b544692c3c8ebe7a4660947c967f142d44346f4"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.817951 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" event={"ID":"1180fe74-10a3-4aa0-b205-7f47597ef9b3","Type":"ContainerStarted","Data":"7a757a5e4e54824731e2cc739189983d506534b385165c14a4258138827b6176"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.818086 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" event={"ID":"1180fe74-10a3-4aa0-b205-7f47597ef9b3","Type":"ContainerStarted","Data":"999e687bc64e6cceb4540cd1247a0ae430a157441090e8bc6fc5467cf64e6ceb"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.819319 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" event={"ID":"8224e4c0-380b-489a-98d8-ee1b15c1637a","Type":"ContainerStarted","Data":"c92e263b64370e3efc5e3743dc938a35c1906b7ddba58a86238a664180d22a79"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.821708 4782 patch_prober.go:28] interesting pod/console-operator-58897d9998-qsnhv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body=
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.821781 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" podUID="4471bb99-24c2-45b0-bb05-3f3d59191e12" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.852448 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rqqwp" podStartSLOduration=8.852421747 podStartE2EDuration="8.852421747s" podCreationTimestamp="2026-02-02 10:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.82723533 +0000 UTC m=+158.711428056" watchObservedRunningTime="2026-02-02 10:41:18.852421747 +0000 UTC m=+158.736614463"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.887321 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.900539 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.400523266 +0000 UTC m=+159.284716172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.918458 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" event={"ID":"3e5362ac-062f-4bf2-a0dc-e96b2750ab52","Type":"ContainerStarted","Data":"8bbfbbad540ae0b9faad8dbb2a178e82cb455cd8ae165b3b2177d148f02280e9"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.936341 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" event={"ID":"04bfeb66-d53c-4263-a149-e7e1d705f9d1","Type":"ContainerStarted","Data":"fa08662019279128f468ce78aa4b169bc4ad199a2b7aee35946f57f8e967b2e8"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.937864 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.939499 4782 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6x4zp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.939546 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" podUID="04bfeb66-d53c-4263-a149-e7e1d705f9d1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.942394 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jvpsj" podStartSLOduration=8.942379934 podStartE2EDuration="8.942379934s" podCreationTimestamp="2026-02-02 10:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.899073554 +0000 UTC m=+158.783266280" watchObservedRunningTime="2026-02-02 10:41:18.942379934 +0000 UTC m=+158.826572650"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.976726 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rx8sj" event={"ID":"d9fd7961-c147-4fad-b4b7-75f5567976f2","Type":"ContainerStarted","Data":"c3b7c73b300d3a3c282ba3dc07a834af4aaad64a851ecf6f9f47686d2eadb747"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.992702 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.994245 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.494219391 +0000 UTC m=+159.378412107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.000030 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" podStartSLOduration=131.000013288 podStartE2EDuration="2m11.000013288s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.997009721 +0000 UTC m=+158.881202447" watchObservedRunningTime="2026-02-02 10:41:19.000013288 +0000 UTC m=+158.884206014"
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.000532 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" podStartSLOduration=131.000527103 podStartE2EDuration="2m11.000527103s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.944480075 +0000 UTC m=+158.828672791" watchObservedRunningTime="2026-02-02 10:41:19.000527103 +0000 UTC m=+158.884719819"
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.007659 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" event={"ID":"4e9af173-4335-4ebd-9b11-dfb4180e968b","Type":"ContainerStarted","Data":"5f54dc86b98848ff22639b3fbcc26d55e544fe0cf2fe684543185333f500edfa"}
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.019059 4782 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dsb8s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body=
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.019124 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused"
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.020102 4782 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l2hps container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.020179 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.046913 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 10:41:19 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld
Feb 02 10:41:19 crc kubenswrapper[4782]: [+]process-running ok
Feb 02 10:41:19 crc kubenswrapper[4782]: healthz check failed
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.046970 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.060986 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" podStartSLOduration=131.060968088 podStartE2EDuration="2m11.060968088s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:19.058758224 +0000 UTC m=+158.942950940" watchObservedRunningTime="2026-02-02 10:41:19.060968088 +0000 UTC m=+158.945160804"
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.073255 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.097260 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.098215 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" podStartSLOduration=131.098194902 podStartE2EDuration="2m11.098194902s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:19.098021487 +0000 UTC m=+158.982214203" watchObservedRunningTime="2026-02-02 10:41:19.098194902 +0000 UTC m=+158.982387618"
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.100289 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.600259352 +0000 UTC m=+159.484452238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.211410 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.211731 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.711710449 +0000 UTC m=+159.595903165 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.212094 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.212463 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.712455091 +0000 UTC m=+159.596647807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.313542 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.313776 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.813740485 +0000 UTC m=+159.697933201 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.313911 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.314331 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.814314602 +0000 UTC m=+159.698507318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.414981 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.415303 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.915256305 +0000 UTC m=+159.799449021 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.415365 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.415937 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.915921204 +0000 UTC m=+159.800114110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.516791 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.517060 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.017024883 +0000 UTC m=+159.901217609 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.517481 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.517969 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.0179571 +0000 UTC m=+159.902149996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.619761 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.620015 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.119970245 +0000 UTC m=+160.004162961 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.620075 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.620486 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.120478799 +0000 UTC m=+160.004671515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.703696 4782 csr.go:261] certificate signing request csr-pg99q is approved, waiting to be issued
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.714742 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.721454 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.721717 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.221669761 +0000 UTC m=+160.105862487 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.721807 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.722177 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.222154985 +0000 UTC m=+160.106347871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.725984 4782 csr.go:257] certificate signing request csr-pg99q is issued
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.823882 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.825939 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.32590973 +0000 UTC m=+160.210102466 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.927798 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.928302 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.428279925 +0000 UTC m=+160.312472821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.017836 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" event={"ID":"8224e4c0-380b-489a-98d8-ee1b15c1637a","Type":"ContainerStarted","Data":"78049b8c2c467db296fbf23b6b6764eb61edf61b04ad5643f450ed222e761cf9"}
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.019963 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rx8sj" event={"ID":"d9fd7961-c147-4fad-b4b7-75f5567976f2","Type":"ContainerStarted","Data":"7def0eee5994841bc5341fcd71852bcf40d32d7bcc0a8468a7f80ed43167a81f"}
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.020320 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rx8sj"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.023147 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" event={"ID":"db132fa2-cd84-4b44-b523-48b1af9f6f73","Type":"ContainerStarted","Data":"f552f5c209be1b2c3ad1e343b59cb7168fc39643cd80e3cfeda1a23d3929d75d"}
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.026359 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" event={"ID":"1f4f42b8-506a-4922-b7c4-7f77afbb238c","Type":"ContainerStarted","Data":"091c99a165d85dd17c961c1351977ee8ed537641cc70661f69345a5c56b09859"}
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.026550 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" event={"ID":"1f4f42b8-506a-4922-b7c4-7f77afbb238c","Type":"ContainerStarted","Data":"e7de695cdb44af327ebdcca6a7412363211e305bdea091b28ca57124a2c2fa76"}
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.028731 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" event={"ID":"4e9af173-4335-4ebd-9b11-dfb4180e968b","Type":"ContainerStarted","Data":"1bf7f946ec28a00eaf539ed76a6e2fddc7e533db23eccf088377ec0830372b07"}
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.028938 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.029495 4782 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-x2mbg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.029550 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" podUID="e457712f-8cc5-4167-b074-cd8713eb9989" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.029613 4782 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6x4zp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.029630 4782 patch_prober.go:28] interesting pod/console-operator-58897d9998-qsnhv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body=
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.029688 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" podUID="04bfeb66-d53c-4263-a149-e7e1d705f9d1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.029694 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" podUID="4471bb99-24c2-45b0-bb05-3f3d59191e12" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.031395 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.531367822 +0000 UTC m=+160.415560538 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.031813 4782 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7v92z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.031863 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" podUID="acfa5788-ab19-4e50-bc93-31b7a5069b32" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.033873 4782 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dsb8s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body=
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.033922 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.062957 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" podStartSLOduration=132.062925563 podStartE2EDuration="2m12.062925563s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.060962976 +0000 UTC m=+159.945155692" watchObservedRunningTime="2026-02-02 10:41:20.062925563 +0000 UTC m=+159.947118279"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.063794 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 10:41:20 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld
Feb 02 10:41:20 crc kubenswrapper[4782]: [+]process-running ok
Feb 02 10:41:20 crc kubenswrapper[4782]: healthz check failed
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.064271 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.104970 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.131330 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.141834 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.64181683 +0000 UTC m=+160.526009546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.151789 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" podStartSLOduration=132.151761757 podStartE2EDuration="2m12.151761757s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.132045328 +0000 UTC m=+160.016238044" watchObservedRunningTime="2026-02-02 10:41:20.151761757 +0000 UTC m=+160.035954473"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.242543 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.242953 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.742934169 +0000 UTC m=+160.627126885 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.344801 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.345827 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.845787239 +0000 UTC m=+160.729979955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.347972 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" podStartSLOduration=133.347959991 podStartE2EDuration="2m13.347959991s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.227064821 +0000 UTC m=+160.111257537" watchObservedRunningTime="2026-02-02 10:41:20.347959991 +0000 UTC m=+160.232152707"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.348135 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rx8sj" podStartSLOduration=10.348131466 podStartE2EDuration="10.348131466s" podCreationTimestamp="2026-02-02 10:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.345603963 +0000 UTC m=+160.229796689" watchObservedRunningTime="2026-02-02 10:41:20.348131466 +0000 UTC m=+160.232324182"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.448991 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.948942897 +0000 UTC m=+160.833135613 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.448831 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.450719 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.452757 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.952724436 +0000 UTC m=+160.836917152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.525924 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" podStartSLOduration=132.525891148 podStartE2EDuration="2m12.525891148s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.416302024 +0000 UTC m=+160.300494740" watchObservedRunningTime="2026-02-02 10:41:20.525891148 +0000 UTC m=+160.410083884"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.528215 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" podStartSLOduration=132.528196495 podStartE2EDuration="2m12.528196495s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.520381839 +0000 UTC m=+160.404574565" watchObservedRunningTime="2026-02-02 10:41:20.528196495 +0000 UTC m=+160.412389211"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.551924 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.552160 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.052123285 +0000 UTC m=+160.936316001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.552845 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.553451 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.053426723 +0000 UTC m=+160.937619439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.654137 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.654412 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.154367167 +0000 UTC m=+161.038559893 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.654788 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.655127 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.155110879 +0000 UTC m=+161.039303595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.727846 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 10:36:19 +0000 UTC, rotation deadline is 2026-12-09 07:15:07.159858635 +0000 UTC Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.727908 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7436h33m46.431956005s for next certificate rotation Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.731449 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" podStartSLOduration=132.731419442 podStartE2EDuration="2m12.731419442s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.60077973 +0000 UTC m=+160.484972456" watchObservedRunningTime="2026-02-02 10:41:20.731419442 +0000 UTC m=+160.615612158" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.756392 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.756846 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.256822415 +0000 UTC m=+161.141015131 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.858336 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.858868 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.35884467 +0000 UTC m=+161.243037386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.885161 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.886694 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.951428 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.951877 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.956907 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.961089 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.961375 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.461331629 +0000 UTC m=+161.345524345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.961501 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.961917 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.461906066 +0000 UTC m=+161.346098962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.057183 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:21 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:21 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:21 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.057259 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.062837 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.063325 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.063396 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.077092 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.563619532 +0000 UTC m=+161.447812248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.099149 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.168377 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.168564 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.168702 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.170518 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.670497388 +0000 UTC m=+161.554690104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.170601 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.271582 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.272101 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.77207707 +0000 UTC m=+161.656269786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.272392 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.272794 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.772786161 +0000 UTC m=+161.656978867 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.302563 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.373657 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.374015 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.873970372 +0000 UTC m=+161.758163108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.476332 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.476785 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.976764989 +0000 UTC m=+161.860957705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.509691 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.577368 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.578275 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.077556429 +0000 UTC m=+161.961749155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.578744 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.579144 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.079127544 +0000 UTC m=+161.963320250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.680210 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.680381 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.180357677 +0000 UTC m=+162.064550393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.680472 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.680798 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.18079139 +0000 UTC m=+162.064984106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.781368 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.781849 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.281823466 +0000 UTC m=+162.166016182 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.830322 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.830749 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.830345 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.830876 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.864608 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.864683 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.882855 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.883316 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.383299776 +0000 UTC m=+162.267492492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.923813 4782 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7v92z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.923934 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" podUID="acfa5788-ab19-4e50-bc93-31b7a5069b32" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.934092 4782 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7v92z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.934196 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" podUID="acfa5788-ab19-4e50-bc93-31b7a5069b32" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.935516 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.968944 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.970392 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.970632 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.977408 4782 patch_prober.go:28] interesting pod/console-f9d7485db-sf9m8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.977476 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sf9m8" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" probeResult="failure" output="Get 
\"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.992440 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.992596 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.49256529 +0000 UTC m=+162.376757996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.992939 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.993002 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.993028 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.993396 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.493389004 +0000 UTC m=+162.377581720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.000800 4782 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z8gmg container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.28:8443/livez\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.000879 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" podUID="1f4f42b8-506a-4922-b7c4-7f77afbb238c" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.28:8443/livez\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.038783 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.069276 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:22 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:22 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:22 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.069360 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.075966 4782 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6x4zp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.076038 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" podUID="04bfeb66-d53c-4263-a149-e7e1d705f9d1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.105934 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.107610 4782 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.607579211 +0000 UTC m=+162.491771927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.208682 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.209045 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.709028709 +0000 UTC m=+162.593221425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.309247 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.309470 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.809434358 +0000 UTC m=+162.693627074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.309661 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.310184 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.81017707 +0000 UTC m=+162.694369786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.414316 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.414830 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.91479215 +0000 UTC m=+162.798984866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.414889 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.415331 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.915323545 +0000 UTC m=+162.799516261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.516413 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.516549 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.016523407 +0000 UTC m=+162.900716123 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.516687 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.517041 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.017031451 +0000 UTC m=+162.901224167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.534335 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.544702 4782 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-fwkht container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 10:41:22 crc kubenswrapper[4782]: [+]log ok Feb 02 10:41:22 crc kubenswrapper[4782]: [+]etcd ok Feb 02 10:41:22 crc kubenswrapper[4782]: [+]etcd-readiness ok Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 10:41:22 crc kubenswrapper[4782]: [-]informer-sync failed: reason withheld Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/max-in-flight-filter ok Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 02 10:41:22 crc kubenswrapper[4782]: [+]shutdown ok Feb 02 10:41:22 crc kubenswrapper[4782]: readyz check failed Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.544807 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" podUID="082079e0-8d5a-4d2e-959e-0366e4787bd5" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.617478 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.617720 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.117684557 +0000 UTC m=+163.001877273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.623474 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.725662 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.726000 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.225983564 +0000 UTC m=+163.110176280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.827560 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.857536 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.357502191 +0000 UTC m=+163.241694907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.946477 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.946958 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.446941992 +0000 UTC m=+163.331134708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.953258 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.953459 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.040278 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:23 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:23 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:23 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.040352 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.047488 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.048013 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.547988179 +0000 UTC m=+163.432180895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.097449 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6","Type":"ContainerStarted","Data":"2034a31847b2f294c83ea2d9717b4e214f2b19b44188b148a7af71f3915c9fe9"} Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.099937 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" event={"ID":"db132fa2-cd84-4b44-b523-48b1af9f6f73","Type":"ContainerStarted","Data":"6db49e8223f961f04f23da7614a5c5219befb4b782386050030a003079a4d672"} Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.149099 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.149728 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.649701155 +0000 UTC m=+163.533894051 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.250554 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.250719 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.75067887 +0000 UTC m=+163.634871586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.251235 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.251591 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.751580786 +0000 UTC m=+163.635773502 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.265924 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8vzzf"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.267098 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.271254 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.284984 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.285739 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.307308 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lxwg2"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.308107 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.308387 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.308553 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.315992 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.321459 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.323867 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vzzf"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.344433 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxwg2"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.352518 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.352819 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.852779598 +0000 UTC m=+163.736972314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.352868 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-catalog-content\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.352912 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0fd85c-ce56-4874-989e-20a0c304efd1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.352933 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0fd85c-ce56-4874-989e-20a0c304efd1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.352949 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-utilities\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.352977 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svwgz\" (UniqueName: \"kubernetes.io/projected/10039944-73fc-417b-925f-48a2985c277d-kube-api-access-svwgz\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.353047 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.353072 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-catalog-content\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.353114 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kf98q\" (UniqueName: \"kubernetes.io/projected/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-kube-api-access-kf98q\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.353135 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-utilities\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.353490 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.853480958 +0000 UTC m=+163.737673674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455409 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455765 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0fd85c-ce56-4874-989e-20a0c304efd1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455808 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0fd85c-ce56-4874-989e-20a0c304efd1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455832 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-utilities\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455863 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svwgz\" (UniqueName: \"kubernetes.io/projected/10039944-73fc-417b-925f-48a2985c277d-kube-api-access-svwgz\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455901 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-catalog-content\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455940 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf98q\" (UniqueName: \"kubernetes.io/projected/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-kube-api-access-kf98q\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455960 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-utilities\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-catalog-content\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.456161 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0fd85c-ce56-4874-989e-20a0c304efd1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.456302 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.956278086 +0000 UTC m=+163.840470802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.456588 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-catalog-content\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.457044 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-catalog-content\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.457294 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-utilities\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.457435 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-utilities\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.506920 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svwgz\" (UniqueName: \"kubernetes.io/projected/10039944-73fc-417b-925f-48a2985c277d-kube-api-access-svwgz\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.509361 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf98q\" (UniqueName: \"kubernetes.io/projected/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-kube-api-access-kf98q\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.534714 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8g5bv"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.536355 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0fd85c-ce56-4874-989e-20a0c304efd1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.536890 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.558319 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.558886 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.058867018 +0000 UTC m=+163.943059734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.584461 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.599055 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8g5bv"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.602275 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.622489 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.633059 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.661489 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.661802 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-utilities\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.661893 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-catalog-content\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.661941 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb5v4\" (UniqueName: \"kubernetes.io/projected/a893973e-e0b3-426e-8bf1-7902687b7036-kube-api-access-nb5v4\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.670002 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.169967945 +0000 UTC m=+164.054160661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.765543 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-catalog-content\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.765619 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb5v4\" (UniqueName: \"kubernetes.io/projected/a893973e-e0b3-426e-8bf1-7902687b7036-kube-api-access-nb5v4\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.765674 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.765696 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-utilities\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.766243 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-utilities\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.766581 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.266566304 +0000 UTC m=+164.150759020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.767020 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-catalog-content\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.768240 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5852s"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.769392 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.859585 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb5v4\" (UniqueName: \"kubernetes.io/projected/a893973e-e0b3-426e-8bf1-7902687b7036-kube-api-access-nb5v4\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.871203 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.871616 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkzrc\" (UniqueName: \"kubernetes.io/projected/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-kube-api-access-vkzrc\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.871864 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.371805812 +0000 UTC m=+164.255998528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.872137 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-utilities\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.872219 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-catalog-content\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.884439 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.887984 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5852s"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.955175 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.991188 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkzrc\" (UniqueName: \"kubernetes.io/projected/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-kube-api-access-vkzrc\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.991253 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.991338 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-utilities\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.991382 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-catalog-content\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 
10:41:23.992424 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-catalog-content\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.992493 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.492471846 +0000 UTC m=+164.376664562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.992693 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-utilities\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.031659 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkzrc\" (UniqueName: \"kubernetes.io/projected/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-kube-api-access-vkzrc\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.044568 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:24 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:24 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:24 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.044664 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.103410 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.105174 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 10:41:24.605140258 +0000 UTC m=+164.489332974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.105452 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.157920 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.170153 4782 generic.go:334] "Generic (PLEG): container finished" podID="310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6" containerID="1f19df8bd992a46faa225a2bdde8f980f9614cec37080c584059715e758ebedc" exitCode=0 Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.170611 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6","Type":"ContainerDied","Data":"1f19df8bd992a46faa225a2bdde8f980f9614cec37080c584059715e758ebedc"} Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.190273 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.207269 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.707252356 +0000 UTC m=+164.591445072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.206854 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.280526 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" event={"ID":"db132fa2-cd84-4b44-b523-48b1af9f6f73","Type":"ContainerStarted","Data":"55d706b8c149a6aaf4f21f54720b6f815043c22d3407d81647de5953a7874b26"} Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.315337 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.316785 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.816760598 +0000 UTC m=+164.700953314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.419399 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.420083 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.92005516 +0000 UTC m=+164.804247876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.476133 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.522665 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.523116 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.023091214 +0000 UTC m=+164.907283930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.624032 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.625349 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.125329416 +0000 UTC m=+165.009522132 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.725679 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.726201 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.226179207 +0000 UTC m=+165.110371923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.828345 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.828891 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.328875352 +0000 UTC m=+165.213068068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.930707 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.931230 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.431202726 +0000 UTC m=+165.315395442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.032819 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.033347 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.533324965 +0000 UTC m=+165.417517871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.137059 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.137465 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.63744187 +0000 UTC m=+165.521634586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.181120 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:25 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:25 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:25 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.181204 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.240719 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.241235 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.741217876 +0000 UTC m=+165.625410592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.310447 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" event={"ID":"db132fa2-cd84-4b44-b523-48b1af9f6f73","Type":"ContainerStarted","Data":"444754b36df118935d60858b524788de7ef6fb8c250d8f4cb68416ab108df6c4"} Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.343209 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.343561 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.84354178 +0000 UTC m=+165.727734496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.381032 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8tk99"] Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.382396 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.439776 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.444628 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-utilities\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.444688 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-catalog-content\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.444758 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.444810 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlmlg\" (UniqueName: \"kubernetes.io/projected/9beb5599-8c2d-4493-9561-cc2781d32052-kube-api-access-zlmlg\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.445242 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.945226686 +0000 UTC m=+165.829419572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.524501 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vzzf"] Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.552536 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.552813 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-utilities\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.552842 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-catalog-content\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.552895 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlmlg\" (UniqueName: \"kubernetes.io/projected/9beb5599-8c2d-4493-9561-cc2781d32052-kube-api-access-zlmlg\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.552986 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.052961776 +0000 UTC m=+165.937154482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.553478 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-utilities\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.554785 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-catalog-content\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.558107 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tk99"] Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.561200 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxwg2"] Feb 02 10:41:25 crc kubenswrapper[4782]: W0202 10:41:25.584921 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10039944_73fc_417b_925f_48a2985c277d.slice/crio-1277c93f9cf96ac2b46fd7682341d99a2d1a3ea302f1d88526054e535369a8b5 WatchSource:0}: Error finding container 1277c93f9cf96ac2b46fd7682341d99a2d1a3ea302f1d88526054e535369a8b5: Status 404 returned error can't find the container with id 1277c93f9cf96ac2b46fd7682341d99a2d1a3ea302f1d88526054e535369a8b5 Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.621709 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" podStartSLOduration=15.62168579 podStartE2EDuration="15.62168579s" podCreationTimestamp="2026-02-02 10:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:25.552237375 +0000 UTC m=+165.436430091" watchObservedRunningTime="2026-02-02 10:41:25.62168579 +0000 UTC m=+165.505878516" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.657293 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.657802 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.157785042 +0000 UTC m=+166.041977758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.692690 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-khjwl"] Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.694295 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.752472 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlmlg\" (UniqueName: \"kubernetes.io/projected/9beb5599-8c2d-4493-9561-cc2781d32052-kube-api-access-zlmlg\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.763137 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.763563 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r7ff\" (UniqueName: \"kubernetes.io/projected/99330299-8910-4c41-b704-120a10eb799b-kube-api-access-2r7ff\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.763627 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-catalog-content\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.763708 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-utilities\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.763867 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.263847844 +0000 UTC m=+166.148040560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.866406 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-catalog-content\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.873347 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-utilities\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.874395 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r7ff\" (UniqueName: \"kubernetes.io/projected/99330299-8910-4c41-b704-120a10eb799b-kube-api-access-2r7ff\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.874538 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.875115 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.375092936 +0000 UTC m=+166.259285652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.874088 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-utilities\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.870492 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-catalog-content\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.902070 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-khjwl"] Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.925266 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r7ff\" (UniqueName: \"kubernetes.io/projected/99330299-8910-4c41-b704-120a10eb799b-kube-api-access-2r7ff\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.981842 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.982204 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.482180178 +0000 UTC m=+166.366372894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.031158 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.084539 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.084994 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.584978605 +0000 UTC m=+166.469171321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.114910 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:26 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:26 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:26 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.114977 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.133975 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.185436 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.185952 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.685920369 +0000 UTC m=+166.570113085 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.247987 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.248080 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5852s"] Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.286704 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.287048 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.787035099 +0000 UTC m=+166.671227815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.295706 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8g5bv"] Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.308984 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g65rt"] Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.310633 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.329184 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.385585 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerStarted","Data":"6ef548a38f0be82eadd409d2f97034be6e36d97b107a1699fdec8cc892db1b86"} Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.387342 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.387595 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-catalog-content\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.387704 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-utilities\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.387723 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mqs8\" (UniqueName: \"kubernetes.io/projected/d9a718cd-1b6d-483f-b995-938331c7e00e-kube-api-access-9mqs8\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.387818 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.887802958 +0000 UTC m=+166.771995674 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.400883 4782 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.408172 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g65rt"] Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.485124 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerStarted","Data":"2157f695d84a6bf7a7c1d517b9438fd49370964e625a05cdc501c630222fe141"} Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.489930 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.490030 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-utilities\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.490062 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mqs8\" (UniqueName: \"kubernetes.io/projected/d9a718cd-1b6d-483f-b995-938331c7e00e-kube-api-access-9mqs8\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.490115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-catalog-content\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.490610 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-catalog-content\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.491000 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.990984566 +0000 UTC m=+166.875177282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.491451 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-utilities\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.492995 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.496049 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"eb0fd85c-ce56-4874-989e-20a0c304efd1","Type":"ContainerStarted","Data":"73b78820ba7c680a8424285974274eccad1462f8f885155bd360dedc3c4644b7"} Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.514171 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerStarted","Data":"9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e"} Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.514216 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerStarted","Data":"1277c93f9cf96ac2b46fd7682341d99a2d1a3ea302f1d88526054e535369a8b5"} Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.535420 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.556804 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mqs8\" (UniqueName: \"kubernetes.io/projected/d9a718cd-1b6d-483f-b995-938331c7e00e-kube-api-access-9mqs8\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.591862 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.594071 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.09403439 +0000 UTC m=+166.978227126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.594526 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.596190 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.096171002 +0000 UTC m=+166.980363718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.690033 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xmt8t"] Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.690348 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6" containerName="pruner" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.690363 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6" containerName="pruner" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.690471 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6" containerName="pruner" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.691411 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695405 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695447 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kubelet-dir\") pod \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695480 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kube-api-access\") pod \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695627 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-catalog-content\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695672 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-utilities\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695763 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h72b9\" (UniqueName: \"kubernetes.io/projected/213698f8-d1b6-489f-8fc4-a69583d4fc2e-kube-api-access-h72b9\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695793 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6" (UID: "310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.695982 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.195952723 +0000 UTC m=+167.080145619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.698418 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.740861 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xmt8t"] Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.749455 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6" (UID: "310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.798751 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h72b9\" (UniqueName: \"kubernetes.io/projected/213698f8-d1b6-489f-8fc4-a69583d4fc2e-kube-api-access-h72b9\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.798824 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-catalog-content\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.798849 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-utilities\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.798881 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.798921 4782 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.798932 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.799282 4782 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.299263495 +0000 UTC m=+167.183456211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.800105 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-catalog-content\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.800317 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-utilities\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.884452 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h72b9\" (UniqueName: \"kubernetes.io/projected/213698f8-d1b6-489f-8fc4-a69583d4fc2e-kube-api-access-h72b9\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.885151 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.900037 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.904865 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.404839643 +0000 UTC m=+167.289032359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.028201 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:27 crc kubenswrapper[4782]: E0202 10:41:27.030894 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.530873262 +0000 UTC m=+167.415066168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.032777 4782 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z8gmg container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]log ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]etcd ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/max-in-flight-filter ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 02 10:41:27 crc kubenswrapper[4782]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 02 10:41:27 crc kubenswrapper[4782]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/project.openshift.io-projectcache ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-startinformers ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 02 10:41:27 crc kubenswrapper[4782]: livez check failed Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.032845 4782 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" podUID="1f4f42b8-506a-4922-b7c4-7f77afbb238c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.046947 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:27 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:27 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:27 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.046994 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.078021 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.080998 4782 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T10:41:26.401272897Z","Handler":null,"Name":""} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.135726 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:27 crc kubenswrapper[4782]: E0202 10:41:27.136207 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.636181392 +0000 UTC m=+167.520374108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.177915 4782 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.177963 4782 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.238876 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.307824 4782 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.307946 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.397806 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tk99"] Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.440692 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-khjwl"] Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.592787 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.593520 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6","Type":"ContainerDied","Data":"2034a31847b2f294c83ea2d9717b4e214f2b19b44188b148a7af71f3915c9fe9"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.593548 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2034a31847b2f294c83ea2d9717b4e214f2b19b44188b148a7af71f3915c9fe9" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.611673 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerStarted","Data":"a77f63d55d27418e43d1dec8a78bc759af36972ea35d4cdd887b4e0dd5624442"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.637673 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.641894 4782 generic.go:334] "Generic (PLEG): container finished" podID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerID="e1f3c2a5262859f45791070cb15411b8e1b8e41e441cf2fae29b116544fe07c5" exitCode=0 Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.642538 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerDied","Data":"e1f3c2a5262859f45791070cb15411b8e1b8e41e441cf2fae29b116544fe07c5"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.653162 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.680603 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerStarted","Data":"568ce8fc0d55d9c475927a100a13079ea3c32843e1f085a43192f2b40f052173"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.708401 4782 generic.go:334] "Generic (PLEG): container finished" podID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerID="a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87" exitCode=0 Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.708500 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerDied","Data":"a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.724549 4782 generic.go:334] "Generic (PLEG): container finished" podID="a893973e-e0b3-426e-8bf1-7902687b7036" containerID="c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c" exitCode=0 Feb 02 10:41:27 crc 
kubenswrapper[4782]: I0202 10:41:27.724709 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerDied","Data":"c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.724753 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerStarted","Data":"f8713e44a60ae45253bec2e5d10994fc19863aeccf7c6e956f5738780c8b26dd"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.726706 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"eb0fd85c-ce56-4874-989e-20a0c304efd1","Type":"ContainerStarted","Data":"be2ba6b14538fcbdaa668607f0c40b729fb85d67aa69815ab5d50dcd6b55a725"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.729988 4782 generic.go:334] "Generic (PLEG): container finished" podID="10039944-73fc-417b-925f-48a2985c277d" containerID="9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e" exitCode=0 Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.730025 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerDied","Data":"9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.758279 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.782937 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.042623 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:28 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:28 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:28 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.043069 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.051921 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.051894998 podStartE2EDuration="5.051894998s" podCreationTimestamp="2026-02-02 10:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:27.874310401 +0000 UTC m=+167.758503127" watchObservedRunningTime="2026-02-02 10:41:28.051894998 +0000 UTC m=+167.936087714" Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.054674 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xmt8t"] Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.085352 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g65rt"] Feb 02 10:41:28 crc kubenswrapper[4782]: W0202 10:41:28.155698 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9a718cd_1b6d_483f_b995_938331c7e00e.slice/crio-4bfe0b83f6a780843d1b621b1e211e51cf466967683c4406c5ee8fe51515e3e6 WatchSource:0}: Error finding container 4bfe0b83f6a780843d1b621b1e211e51cf466967683c4406c5ee8fe51515e3e6: Status 404 returned error can't find the container with id 4bfe0b83f6a780843d1b621b1e211e51cf466967683c4406c5ee8fe51515e3e6 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.541881 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jxz27"] Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.740622 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" event={"ID":"4877e80d-a6fe-4503-a64c-398815efa1e0","Type":"ContainerStarted","Data":"a55c72e5f15ff42bfcfbbd5f83cbfe22e092ae45221bb6158bb15a9d235221ed"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.742853 4782 generic.go:334] "Generic (PLEG): container finished" podID="9832aa65-d498-4a21-b53a-ebc591328a00" containerID="b85748eff3923d08bc6d620f725d7b018256a0e4610871950b9aeb66eccc2539" exitCode=0 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.742938 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" event={"ID":"9832aa65-d498-4a21-b53a-ebc591328a00","Type":"ContainerDied","Data":"b85748eff3923d08bc6d620f725d7b018256a0e4610871950b9aeb66eccc2539"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.750097 4782 
generic.go:334] "Generic (PLEG): container finished" podID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerID="79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a" exitCode=0 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.750768 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerDied","Data":"79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.750930 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerStarted","Data":"4bfe0b83f6a780843d1b621b1e211e51cf466967683c4406c5ee8fe51515e3e6"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.754524 4782 generic.go:334] "Generic (PLEG): container finished" podID="9beb5599-8c2d-4493-9561-cc2781d32052" containerID="26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1" exitCode=0 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.754698 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerDied","Data":"26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.778249 4782 generic.go:334] "Generic (PLEG): container finished" podID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerID="e85622bd784d09d56836a615239db13244c2d5b26841db53fd14e1ec1665771e" exitCode=0 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.778748 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerDied","Data":"e85622bd784d09d56836a615239db13244c2d5b26841db53fd14e1ec1665771e"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.778817 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerStarted","Data":"c20f4c43562c9a26701d05b9c48459ab9215c0f89e3d7636a6006f20e7c4c9aa"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.782147 4782 generic.go:334] "Generic (PLEG): container finished" podID="eb0fd85c-ce56-4874-989e-20a0c304efd1" containerID="be2ba6b14538fcbdaa668607f0c40b729fb85d67aa69815ab5d50dcd6b55a725" exitCode=0 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.782274 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"eb0fd85c-ce56-4874-989e-20a0c304efd1","Type":"ContainerDied","Data":"be2ba6b14538fcbdaa668607f0c40b729fb85d67aa69815ab5d50dcd6b55a725"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.790391 4782 generic.go:334] "Generic (PLEG): container finished" podID="99330299-8910-4c41-b704-120a10eb799b" containerID="4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4" exitCode=0 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.790423 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerDied","Data":"4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.821459 4782 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.886400 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 02 10:41:29 crc kubenswrapper[4782]: I0202 10:41:29.047165 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:29 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:29 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:29 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:29 crc kubenswrapper[4782]: I0202 10:41:29.047286 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:29 crc kubenswrapper[4782]: I0202 10:41:29.984327 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" event={"ID":"4877e80d-a6fe-4503-a64c-398815efa1e0","Type":"ContainerStarted","Data":"9a008fac97312f4f6086015007e8d88cd2830bbd1da80845daf3cde642284642"} Feb 02 10:41:29 crc kubenswrapper[4782]: I0202 10:41:29.985297 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.018916 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" podStartSLOduration=142.018882354 podStartE2EDuration="2m22.018882354s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:30.010886433 +0000 UTC m=+169.895079149" watchObservedRunningTime="2026-02-02 10:41:30.018882354 +0000 UTC m=+169.903075070" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.042747 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:30 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:30 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:30 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.043590 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.524900 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.547988 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.578762 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.618338 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.646028 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.682303 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume\") pod \"9832aa65-d498-4a21-b53a-ebc591328a00\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.682379 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzgdp\" (UniqueName: \"kubernetes.io/projected/9832aa65-d498-4a21-b53a-ebc591328a00-kube-api-access-kzgdp\") pod \"9832aa65-d498-4a21-b53a-ebc591328a00\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.682439 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0fd85c-ce56-4874-989e-20a0c304efd1-kubelet-dir\") pod \"eb0fd85c-ce56-4874-989e-20a0c304efd1\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.682486 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0fd85c-ce56-4874-989e-20a0c304efd1-kube-api-access\") pod \"eb0fd85c-ce56-4874-989e-20a0c304efd1\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.682549 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9832aa65-d498-4a21-b53a-ebc591328a00-secret-volume\") pod \"9832aa65-d498-4a21-b53a-ebc591328a00\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.684761 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb0fd85c-ce56-4874-989e-20a0c304efd1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eb0fd85c-ce56-4874-989e-20a0c304efd1" (UID: "eb0fd85c-ce56-4874-989e-20a0c304efd1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.688934 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume" (OuterVolumeSpecName: "config-volume") pod "9832aa65-d498-4a21-b53a-ebc591328a00" (UID: "9832aa65-d498-4a21-b53a-ebc591328a00"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.691999 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.692040 4782 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0fd85c-ce56-4874-989e-20a0c304efd1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.705436 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9832aa65-d498-4a21-b53a-ebc591328a00-kube-api-access-kzgdp" (OuterVolumeSpecName: "kube-api-access-kzgdp") pod "9832aa65-d498-4a21-b53a-ebc591328a00" (UID: "9832aa65-d498-4a21-b53a-ebc591328a00"). InnerVolumeSpecName "kube-api-access-kzgdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.706100 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9832aa65-d498-4a21-b53a-ebc591328a00-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9832aa65-d498-4a21-b53a-ebc591328a00" (UID: "9832aa65-d498-4a21-b53a-ebc591328a00"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.706249 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0fd85c-ce56-4874-989e-20a0c304efd1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eb0fd85c-ce56-4874-989e-20a0c304efd1" (UID: "eb0fd85c-ce56-4874-989e-20a0c304efd1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.795255 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzgdp\" (UniqueName: \"kubernetes.io/projected/9832aa65-d498-4a21-b53a-ebc591328a00-kube-api-access-kzgdp\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.795295 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0fd85c-ce56-4874-989e-20a0c304efd1-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.795306 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9832aa65-d498-4a21-b53a-ebc591328a00-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.010508 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.011028 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"eb0fd85c-ce56-4874-989e-20a0c304efd1","Type":"ContainerDied","Data":"73b78820ba7c680a8424285974274eccad1462f8f885155bd360dedc3c4644b7"} Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.011068 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73b78820ba7c680a8424285974274eccad1462f8f885155bd360dedc3c4644b7" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.025185 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.025676 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" event={"ID":"9832aa65-d498-4a21-b53a-ebc591328a00","Type":"ContainerDied","Data":"366cbdbf60f3a0bf6cbbfb63bc5e1d0a80ef38264552b527daf3339d1fbe1798"} Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.025714 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="366cbdbf60f3a0bf6cbbfb63bc5e1d0a80ef38264552b527daf3339d1fbe1798" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.040322 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:31 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:31 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:31 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.040440 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.320816 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tv4xc"] Feb 02 10:41:31 crc kubenswrapper[4782]: W0202 10:41:31.370122 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e23db96_3af7_4c29_b00f_5920a9431f01.slice/crio-0f9ec78ec7544a3c8adf48740082794167923bddb4d6e61c462ee066c8eef472 WatchSource:0}: Error finding container 0f9ec78ec7544a3c8adf48740082794167923bddb4d6e61c462ee066c8eef472: Status 404 returned error can't find the container with id 0f9ec78ec7544a3c8adf48740082794167923bddb4d6e61c462ee066c8eef472 Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.830718 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.830826 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.830948 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.831034 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.970571 4782 patch_prober.go:28] interesting pod/console-f9d7485db-sf9m8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.970788 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sf9m8" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 02 10:41:32 crc kubenswrapper[4782]: I0202 10:41:32.007954 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:32 crc kubenswrapper[4782]: I0202 10:41:32.015148 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:32 crc kubenswrapper[4782]: I0202 10:41:32.042230 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:32 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:32 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:32 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:32 crc kubenswrapper[4782]: I0202 10:41:32.042307 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:32 crc kubenswrapper[4782]: I0202 10:41:32.116678 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" event={"ID":"4e23db96-3af7-4c29-b00f-5920a9431f01","Type":"ContainerStarted","Data":"0f9ec78ec7544a3c8adf48740082794167923bddb4d6e61c462ee066c8eef472"} Feb 02 10:41:33 crc kubenswrapper[4782]: I0202 10:41:33.042680 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:33 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:33 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:33 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:33 crc 
kubenswrapper[4782]: I0202 10:41:33.043049 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:33 crc kubenswrapper[4782]: I0202 10:41:33.191071 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" event={"ID":"4e23db96-3af7-4c29-b00f-5920a9431f01","Type":"ContainerStarted","Data":"500187f503dbccc37b0645a02fa6097bb379cc819c65ca3847c0b5dd1c498f5d"} Feb 02 10:41:34 crc kubenswrapper[4782]: I0202 10:41:34.039516 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:34 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:34 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:34 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:34 crc kubenswrapper[4782]: I0202 10:41:34.039698 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:34 crc kubenswrapper[4782]: I0202 10:41:34.256089 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" event={"ID":"4e23db96-3af7-4c29-b00f-5920a9431f01","Type":"ContainerStarted","Data":"95af337b0c1919520f82974ea1565d0b6269697b6423ebe2aabf4b5dbba97ff3"} Feb 02 10:41:34 crc kubenswrapper[4782]: I0202 10:41:34.283924 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tv4xc" podStartSLOduration=146.283885501 podStartE2EDuration="2m26.283885501s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:34.281229595 +0000 UTC m=+174.165422321" watchObservedRunningTime="2026-02-02 10:41:34.283885501 +0000 UTC m=+174.168078217" Feb 02 10:41:35 crc kubenswrapper[4782]: I0202 10:41:35.042293 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:35 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:35 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:35 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:35 crc kubenswrapper[4782]: I0202 10:41:35.042627 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:36 crc kubenswrapper[4782]: I0202 10:41:36.041833 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:36 crc kubenswrapper[4782]: [-]has-synced failed: 
reason withheld Feb 02 10:41:36 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:36 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:36 crc kubenswrapper[4782]: I0202 10:41:36.041897 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:37 crc kubenswrapper[4782]: I0202 10:41:37.040022 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:37 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:37 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:37 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:37 crc kubenswrapper[4782]: I0202 10:41:37.040081 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:38 crc kubenswrapper[4782]: I0202 10:41:38.039851 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:38 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:38 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:38 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:38 crc kubenswrapper[4782]: I0202 10:41:38.040247 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:38 crc kubenswrapper[4782]: I0202 10:41:38.600812 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l2hps"] Feb 02 10:41:38 crc kubenswrapper[4782]: I0202 10:41:38.601008 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" containerID="cri-o://6845853f244b2c75c99b177a0d1190d48806df172d59ab6bff1e9c7722883a01" gracePeriod=30 Feb 02 10:41:38 crc kubenswrapper[4782]: I0202 10:41:38.646028 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"] Feb 02 10:41:38 crc kubenswrapper[4782]: I0202 10:41:38.646240 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" podUID="59a1b37a-9035-459b-a485-280325d33264" containerName="route-controller-manager" containerID="cri-o://43da730602ca37219a75d1347b35a8488feb8647bfa755b3e8e2deac39ad1b1b" gracePeriod=30 Feb 02 10:41:39 crc kubenswrapper[4782]: I0202 10:41:39.043153 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:39 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:39 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:39 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:39 crc kubenswrapper[4782]: I0202 10:41:39.043235 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:39 crc kubenswrapper[4782]: I0202 10:41:39.378258 4782 generic.go:334] "Generic (PLEG): container finished" podID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerID="6845853f244b2c75c99b177a0d1190d48806df172d59ab6bff1e9c7722883a01" exitCode=0 Feb 02 10:41:39 crc kubenswrapper[4782]: I0202 10:41:39.378340 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" event={"ID":"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc","Type":"ContainerDied","Data":"6845853f244b2c75c99b177a0d1190d48806df172d59ab6bff1e9c7722883a01"} Feb 02 10:41:39 crc kubenswrapper[4782]: I0202 10:41:39.380549 4782 generic.go:334] "Generic (PLEG): container finished" podID="59a1b37a-9035-459b-a485-280325d33264" containerID="43da730602ca37219a75d1347b35a8488feb8647bfa755b3e8e2deac39ad1b1b" exitCode=0 Feb 02 10:41:39 crc kubenswrapper[4782]: I0202 10:41:39.380583 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" event={"ID":"59a1b37a-9035-459b-a485-280325d33264","Type":"ContainerDied","Data":"43da730602ca37219a75d1347b35a8488feb8647bfa755b3e8e2deac39ad1b1b"} Feb 02 10:41:40 crc kubenswrapper[4782]: I0202 10:41:40.039701 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:40 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:40 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:40 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:40 crc kubenswrapper[4782]: I0202 10:41:40.040116 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.051467 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:41 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:41 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:41 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.051946 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.830453 4782 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.830504 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.830509 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.830567 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.830556 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.831015 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.831046 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.831339 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"10f247f0ec7d89e86c0b592dc814ce67e3e07cafde8483a429f5e7c5f241e65d"} pod="openshift-console/downloads-7954f5f757-4b45h" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.831509 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" containerID="cri-o://10f247f0ec7d89e86c0b592dc814ce67e3e07cafde8483a429f5e7c5f241e65d" gracePeriod=2 Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.923421 4782 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-96t4g container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.923489 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" 
podUID="59a1b37a-9035-459b-a485-280325d33264" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.971826 4782 patch_prober.go:28] interesting pod/console-f9d7485db-sf9m8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.971888 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sf9m8" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 02 10:41:42 crc kubenswrapper[4782]: I0202 10:41:42.039138 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:42 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:42 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:42 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:42 crc kubenswrapper[4782]: I0202 10:41:42.039197 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:42 crc kubenswrapper[4782]: I0202 10:41:42.345103 4782 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l2hps container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 10:41:42 crc kubenswrapper[4782]: I0202 10:41:42.345174 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 10:41:43 crc kubenswrapper[4782]: I0202 10:41:43.040215 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:43 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:43 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:43 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:43 crc kubenswrapper[4782]: I0202 10:41:43.040311 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:43 crc kubenswrapper[4782]: I0202 10:41:43.427689 4782 generic.go:334] "Generic (PLEG): container finished" podID="e74c7e17-c70b-4637-ad47-58e1e192c52e" 
containerID="10f247f0ec7d89e86c0b592dc814ce67e3e07cafde8483a429f5e7c5f241e65d" exitCode=0 Feb 02 10:41:43 crc kubenswrapper[4782]: I0202 10:41:43.427736 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4b45h" event={"ID":"e74c7e17-c70b-4637-ad47-58e1e192c52e","Type":"ContainerDied","Data":"10f247f0ec7d89e86c0b592dc814ce67e3e07cafde8483a429f5e7c5f241e65d"} Feb 02 10:41:44 crc kubenswrapper[4782]: I0202 10:41:44.040796 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:44 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:44 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:44 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:44 crc kubenswrapper[4782]: I0202 10:41:44.041555 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:45 crc kubenswrapper[4782]: I0202 10:41:45.039169 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:45 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:45 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:45 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:45 crc kubenswrapper[4782]: I0202 10:41:45.039265 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:46 crc kubenswrapper[4782]: I0202 10:41:46.040145 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:46 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:46 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:46 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:46 crc kubenswrapper[4782]: I0202 10:41:46.040213 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:47 crc kubenswrapper[4782]: I0202 10:41:47.039223 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:47 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:47 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:47 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:47 crc kubenswrapper[4782]: I0202 10:41:47.039285 4782 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:47 crc kubenswrapper[4782]: I0202 10:41:47.794889 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:48 crc kubenswrapper[4782]: I0202 10:41:48.039225 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:48 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:48 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:48 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:48 crc kubenswrapper[4782]: I0202 10:41:48.039299 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:49 crc kubenswrapper[4782]: I0202 10:41:49.039078 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:49 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:49 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:49 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:49 crc kubenswrapper[4782]: I0202 10:41:49.039135 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:49 crc kubenswrapper[4782]: I0202 10:41:49.125894 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.040184 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:50 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:50 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:50 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.040272 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.180564 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218271 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cbd88d5cd-55tml"] Feb 02 10:41:50 crc kubenswrapper[4782]: E0202 10:41:50.218756 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0fd85c-ce56-4874-989e-20a0c304efd1" containerName="pruner" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218784 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0fd85c-ce56-4874-989e-20a0c304efd1" containerName="pruner" Feb 02 10:41:50 crc kubenswrapper[4782]: E0202 10:41:50.218798 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218806 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" Feb 02 10:41:50 crc kubenswrapper[4782]: E0202 10:41:50.218820 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9832aa65-d498-4a21-b53a-ebc591328a00" containerName="collect-profiles" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218830 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9832aa65-d498-4a21-b53a-ebc591328a00" containerName="collect-profiles" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218941 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0fd85c-ce56-4874-989e-20a0c304efd1" containerName="pruner" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218956 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9832aa65-d498-4a21-b53a-ebc591328a00" containerName="collect-profiles" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218963 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.220012 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.231422 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cbd88d5cd-55tml"] Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.320396 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-config\") pod \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.320457 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dqmf\" (UniqueName: \"kubernetes.io/projected/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-kube-api-access-7dqmf\") pod \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.320584 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-serving-cert\") pod \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.320616 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-proxy-ca-bundles\") pod \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.320698 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-client-ca\") pod \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.320907 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-client-ca\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.321001 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-proxy-ca-bundles\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.321039 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d56934a-19d0-4c31-a6df-afcabaa1ed24-serving-cert\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.321074 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8gfr\" (UniqueName: 
\"kubernetes.io/projected/5d56934a-19d0-4c31-a6df-afcabaa1ed24-kube-api-access-g8gfr\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.321125 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-config\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.321858 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-config" (OuterVolumeSpecName: "config") pod "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" (UID: "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.321872 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-client-ca" (OuterVolumeSpecName: "client-ca") pod "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" (UID: "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.322122 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" (UID: "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.333964 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-kube-api-access-7dqmf" (OuterVolumeSpecName: "kube-api-access-7dqmf") pod "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" (UID: "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc"). InnerVolumeSpecName "kube-api-access-7dqmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.339720 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" (UID: "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.421896 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d56934a-19d0-4c31-a6df-afcabaa1ed24-serving-cert\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.421954 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8gfr\" (UniqueName: \"kubernetes.io/projected/5d56934a-19d0-4c31-a6df-afcabaa1ed24-kube-api-access-g8gfr\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.421990 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-config\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422040 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-client-ca\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422068 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-proxy-ca-bundles\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422108 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422118 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422131 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422141 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422150 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dqmf\" (UniqueName: \"kubernetes.io/projected/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-kube-api-access-7dqmf\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.423314 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-client-ca\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.423317 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-proxy-ca-bundles\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.425600 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-config\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.426892 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d56934a-19d0-4c31-a6df-afcabaa1ed24-serving-cert\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.441606 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8gfr\" (UniqueName: \"kubernetes.io/projected/5d56934a-19d0-4c31-a6df-afcabaa1ed24-kube-api-access-g8gfr\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.491093 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" event={"ID":"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc","Type":"ContainerDied","Data":"e1d3c6b879e919d9a8eeb6fc928bb73b1f6f10789e409d2d56a047e2a54eac9e"} Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.491180 4782 scope.go:117] "RemoveContainer" containerID="6845853f244b2c75c99b177a0d1190d48806df172d59ab6bff1e9c7722883a01" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.491334 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.540160 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.544225 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l2hps"] Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.550350 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l2hps"] Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.828272 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" path="/var/lib/kubelet/pods/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc/volumes" Feb 02 10:41:51 crc kubenswrapper[4782]: I0202 10:41:51.039876 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:51 crc kubenswrapper[4782]: [+]has-synced ok Feb 02 10:41:51 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:51 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:51 crc kubenswrapper[4782]: I0202 10:41:51.040018 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:51 crc kubenswrapper[4782]: I0202 10:41:51.832417 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:51 crc kubenswrapper[4782]: I0202 10:41:51.833102 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:51 crc kubenswrapper[4782]: I0202 10:41:51.971454 4782 patch_prober.go:28] interesting pod/console-f9d7485db-sf9m8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 02 10:41:51 crc kubenswrapper[4782]: I0202 10:41:51.971504 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sf9m8" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 02 10:41:52 crc kubenswrapper[4782]: I0202 10:41:52.040211 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:52 crc kubenswrapper[4782]: I0202 10:41:52.043009 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:52 crc kubenswrapper[4782]: I0202 10:41:52.923281 4782 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-96t4g container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:41:52 crc kubenswrapper[4782]: I0202 10:41:52.923382 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" podUID="59a1b37a-9035-459b-a485-280325d33264" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:41:52 crc kubenswrapper[4782]: I0202 10:41:52.951711 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:41:52 crc kubenswrapper[4782]: I0202 10:41:52.951827 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:41:53 crc kubenswrapper[4782]: I0202 10:41:53.962727 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:58 crc kubenswrapper[4782]: I0202 10:41:58.610400 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cbd88d5cd-55tml"] Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.968734 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.970395 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.975124 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.975187 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.975343 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.976784 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.979033 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.075817 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.075921 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.075938 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.100464 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.315433 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.530375 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.570742 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" event={"ID":"59a1b37a-9035-459b-a485-280325d33264","Type":"ContainerDied","Data":"1bed6a14af1c27e28bfaf20957b4ab6debdecb60fbd87716abbb4a3205ddb87a"} Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.570819 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.580649 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp"] Feb 02 10:42:01 crc kubenswrapper[4782]: E0202 10:42:01.580935 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a1b37a-9035-459b-a485-280325d33264" containerName="route-controller-manager" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.580952 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a1b37a-9035-459b-a485-280325d33264" containerName="route-controller-manager" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581145 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a1b37a-9035-459b-a485-280325d33264" containerName="route-controller-manager" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581619 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a1b37a-9035-459b-a485-280325d33264-serving-cert\") pod \"59a1b37a-9035-459b-a485-280325d33264\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581691 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9tz5\" (UniqueName: \"kubernetes.io/projected/59a1b37a-9035-459b-a485-280325d33264-kube-api-access-p9tz5\") pod \"59a1b37a-9035-459b-a485-280325d33264\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581721 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp"] Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581810 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-config\") pod \"59a1b37a-9035-459b-a485-280325d33264\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581835 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581835 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-client-ca\") pod \"59a1b37a-9035-459b-a485-280325d33264\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.582420 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-client-ca" (OuterVolumeSpecName: "client-ca") pod "59a1b37a-9035-459b-a485-280325d33264" (UID: "59a1b37a-9035-459b-a485-280325d33264"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.582838 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.583672 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-config" (OuterVolumeSpecName: "config") pod "59a1b37a-9035-459b-a485-280325d33264" (UID: "59a1b37a-9035-459b-a485-280325d33264"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.587843 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a1b37a-9035-459b-a485-280325d33264-kube-api-access-p9tz5" (OuterVolumeSpecName: "kube-api-access-p9tz5") pod "59a1b37a-9035-459b-a485-280325d33264" (UID: "59a1b37a-9035-459b-a485-280325d33264"). InnerVolumeSpecName "kube-api-access-p9tz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.594960 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a1b37a-9035-459b-a485-280325d33264-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59a1b37a-9035-459b-a485-280325d33264" (UID: "59a1b37a-9035-459b-a485-280325d33264"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.683932 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-config\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.683994 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-client-ca\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.684066 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szf25\" (UniqueName: \"kubernetes.io/projected/2227870a-e9fb-429e-a495-cfa17761d275-kube-api-access-szf25\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.684288 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2227870a-e9fb-429e-a495-cfa17761d275-serving-cert\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.684458 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a1b37a-9035-459b-a485-280325d33264-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.684478 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9tz5\" (UniqueName: \"kubernetes.io/projected/59a1b37a-9035-459b-a485-280325d33264-kube-api-access-p9tz5\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.684494 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.785911 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2227870a-e9fb-429e-a495-cfa17761d275-serving-cert\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.786001 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-config\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc 
kubenswrapper[4782]: I0202 10:42:01.786043 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-client-ca\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.786098 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szf25\" (UniqueName: \"kubernetes.io/projected/2227870a-e9fb-429e-a495-cfa17761d275-kube-api-access-szf25\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.787819 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-client-ca\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.789399 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-config\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.792376 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2227870a-e9fb-429e-a495-cfa17761d275-serving-cert\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.802565 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szf25\" (UniqueName: \"kubernetes.io/projected/2227870a-e9fb-429e-a495-cfa17761d275-kube-api-access-szf25\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.831470 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.831582 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.903542 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"] Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.909693 4782 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"] Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.937912 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.975067 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.979570 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:42:02 crc kubenswrapper[4782]: I0202 10:42:02.828976 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a1b37a-9035-459b-a485-280325d33264" path="/var/lib/kubelet/pods/59a1b37a-9035-459b-a485-280325d33264/volumes" Feb 02 10:42:04 crc kubenswrapper[4782]: I0202 10:42:04.974500 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 10:42:04 crc kubenswrapper[4782]: I0202 10:42:04.975803 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:04 crc kubenswrapper[4782]: I0202 10:42:04.979039 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.034552 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-var-lock\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.034828 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2939f4-fa35-4f01-a896-2ddc746ac111-kube-api-access\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.034906 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.135896 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-var-lock\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.136049 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-var-lock\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.136114 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/bf2939f4-fa35-4f01-a896-2ddc746ac111-kube-api-access\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.136275 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.136383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.157854 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2939f4-fa35-4f01-a896-2ddc746ac111-kube-api-access\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.316817 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:09 crc kubenswrapper[4782]: E0202 10:42:09.034435 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 02 10:42:09 crc kubenswrapper[4782]: E0202 10:42:09.034993 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h72b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-xmt8t_openshift-marketplace(213698f8-d1b6-489f-8fc4-a69583d4fc2e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 10:42:09 crc kubenswrapper[4782]: E0202 10:42:09.037931 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xmt8t" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" Feb 02 10:42:09 crc kubenswrapper[4782]: E0202 10:42:09.237006 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 02 10:42:09 crc kubenswrapper[4782]: E0202 10:42:09.237203 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mqs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-g65rt_openshift-marketplace(d9a718cd-1b6d-483f-b995-938331c7e00e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 10:42:09 crc kubenswrapper[4782]: E0202 10:42:09.238566 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-g65rt" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" Feb 02 10:42:10 crc kubenswrapper[4782]: E0202 10:42:10.590690 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-xmt8t" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" Feb 02 10:42:10 crc kubenswrapper[4782]: E0202 10:42:10.591714 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-g65rt" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.093984 4782 scope.go:117] "RemoveContainer" containerID="43da730602ca37219a75d1347b35a8488feb8647bfa755b3e8e2deac39ad1b1b" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.202277 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.202419 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlmlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8tk99_openshift-marketplace(9beb5599-8c2d-4493-9561-cc2781d32052): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.203827 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8tk99" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.505740 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.506330 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb5v4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8g5bv_openshift-marketplace(a893973e-e0b3-426e-8bf1-7902687b7036): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.508657 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8g5bv" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.628959 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4b45h" event={"ID":"e74c7e17-c70b-4637-ad47-58e1e192c52e","Type":"ContainerStarted","Data":"3d79dc7da7d2f8fe083b258d3fc741f3697071dd145f2ddcb0763fccf6144932"} Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.631790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerStarted","Data":"00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd"} Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.639099 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8tk99" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.641486 4782 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8g5bv" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.669905 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.670110 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r7ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-khjwl_openshift-marketplace(99330299-8910-4c41-b704-120a10eb799b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.673355 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-khjwl" podUID="99330299-8910-4c41-b704-120a10eb799b" Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.746229 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.818273 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.830748 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" 
start-of-body= Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.830827 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.863540 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cbd88d5cd-55tml"] Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.877367 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp"] Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.649992 4782 generic.go:334] "Generic (PLEG): container finished" podID="10039944-73fc-417b-925f-48a2985c277d" containerID="c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7" exitCode=0 Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.651808 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerDied","Data":"c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.655361 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6f4da25e-551a-4f31-9ee0-fb20b4589dfd","Type":"ContainerStarted","Data":"48f5b10da2bba632a14a6ad3feb2a4777b1c0f0ff7a5ac4b7850c09cc0320f84"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.671658 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerStarted","Data":"2276d1082587cac8d61118d06023cf6c740850dc2c5e7490e914a2f87e0a7eb9"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.686628 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf2939f4-fa35-4f01-a896-2ddc746ac111","Type":"ContainerStarted","Data":"acb92178b080f16f9482f40e0b16c2c17b6094a867d22c2de5c7014e8aa3b4cd"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.691243 4782 generic.go:334] "Generic (PLEG): container finished" podID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerID="00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd" exitCode=0 Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.691429 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerDied","Data":"00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.712543 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" event={"ID":"5d56934a-19d0-4c31-a6df-afcabaa1ed24","Type":"ContainerStarted","Data":"57759453f91f6f1ae38bb4987e54806618ccbdad87e7c2e009c76f00cce3bbb3"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.716480 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" 
event={"ID":"2227870a-e9fb-429e-a495-cfa17761d275","Type":"ContainerStarted","Data":"eb8b4451b7251617cfeca26bf86a321d4715359dfd467ddc355a4e63b2aa0184"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.717215 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.717343 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.717425 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:42:12 crc kubenswrapper[4782]: E0202 10:42:12.721584 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-khjwl" podUID="99330299-8910-4c41-b704-120a10eb799b" Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.833693 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" event={"ID":"2227870a-e9fb-429e-a495-cfa17761d275","Type":"ContainerStarted","Data":"daea6a83dc3b43407a13be8c2a6f6cbd39fcf1feca04fd1859d88ddf144f5b8e"} Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.835812 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6f4da25e-551a-4f31-9ee0-fb20b4589dfd","Type":"ContainerStarted","Data":"b1b049c4e6c69853b509050eecbd80ccb18b26e4d2dbfe0e74f5c388c9a1cb17"} Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.837234 4782 generic.go:334] "Generic (PLEG): container finished" podID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerID="2276d1082587cac8d61118d06023cf6c740850dc2c5e7490e914a2f87e0a7eb9" exitCode=0 Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.837276 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerDied","Data":"2276d1082587cac8d61118d06023cf6c740850dc2c5e7490e914a2f87e0a7eb9"} Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.843503 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf2939f4-fa35-4f01-a896-2ddc746ac111","Type":"ContainerStarted","Data":"33a3c62f71f1956073f3a08b721e64058cac19abb9f4f54ee5048a1701d7cade"} Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.844674 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.844764 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.446665 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sc7kt"] Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.851522 4782 generic.go:334] "Generic (PLEG): container finished" podID="6f4da25e-551a-4f31-9ee0-fb20b4589dfd" containerID="b1b049c4e6c69853b509050eecbd80ccb18b26e4d2dbfe0e74f5c388c9a1cb17" exitCode=0 Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.851595 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6f4da25e-551a-4f31-9ee0-fb20b4589dfd","Type":"ContainerDied","Data":"b1b049c4e6c69853b509050eecbd80ccb18b26e4d2dbfe0e74f5c388c9a1cb17"} Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.855404 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" podUID="5d56934a-19d0-4c31-a6df-afcabaa1ed24" containerName="controller-manager" containerID="cri-o://1df2e027b51a908c06570aa226b9bc2e1d2b54ac7fe19b8a5f926341bb11cc47" gracePeriod=30 Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.855887 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" event={"ID":"5d56934a-19d0-4c31-a6df-afcabaa1ed24","Type":"ContainerStarted","Data":"1df2e027b51a908c06570aa226b9bc2e1d2b54ac7fe19b8a5f926341bb11cc47"} Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.855939 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.856578 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.861914 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.862216 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.927113 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" podStartSLOduration=16.927088592 podStartE2EDuration="16.927088592s" podCreationTimestamp="2026-02-02 10:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:14.900911475 +0000 UTC m=+214.785104211" watchObservedRunningTime="2026-02-02 10:42:14.927088592 +0000 UTC m=+214.811281308" Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.927250 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=10.927243806 podStartE2EDuration="10.927243806s" podCreationTimestamp="2026-02-02 10:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:14.923603091 
+0000 UTC m=+214.807795807" watchObservedRunningTime="2026-02-02 10:42:14.927243806 +0000 UTC m=+214.811436542" Feb 02 10:42:15 crc kubenswrapper[4782]: I0202 10:42:15.862320 4782 generic.go:334] "Generic (PLEG): container finished" podID="5d56934a-19d0-4c31-a6df-afcabaa1ed24" containerID="1df2e027b51a908c06570aa226b9bc2e1d2b54ac7fe19b8a5f926341bb11cc47" exitCode=0 Feb 02 10:42:15 crc kubenswrapper[4782]: I0202 10:42:15.862412 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" event={"ID":"5d56934a-19d0-4c31-a6df-afcabaa1ed24","Type":"ContainerDied","Data":"1df2e027b51a908c06570aa226b9bc2e1d2b54ac7fe19b8a5f926341bb11cc47"} Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.097439 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.124094 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" podStartSLOduration=38.124068647 podStartE2EDuration="38.124068647s" podCreationTimestamp="2026-02-02 10:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:14.972986767 +0000 UTC m=+214.857179483" watchObservedRunningTime="2026-02-02 10:42:16.124068647 +0000 UTC m=+216.008261363" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.216874 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kube-api-access\") pod \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.216996 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kubelet-dir\") pod \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.217300 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6f4da25e-551a-4f31-9ee0-fb20b4589dfd" (UID: "6f4da25e-551a-4f31-9ee0-fb20b4589dfd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.222560 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6f4da25e-551a-4f31-9ee0-fb20b4589dfd" (UID: "6f4da25e-551a-4f31-9ee0-fb20b4589dfd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.318364 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.318682 4782 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.869249 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6f4da25e-551a-4f31-9ee0-fb20b4589dfd","Type":"ContainerDied","Data":"48f5b10da2bba632a14a6ad3feb2a4777b1c0f0ff7a5ac4b7850c09cc0320f84"} Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.869289 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f5b10da2bba632a14a6ad3feb2a4777b1c0f0ff7a5ac4b7850c09cc0320f84" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.869449 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.328628 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.429793 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d56934a-19d0-4c31-a6df-afcabaa1ed24-serving-cert\") pod \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.429892 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8gfr\" (UniqueName: \"kubernetes.io/projected/5d56934a-19d0-4c31-a6df-afcabaa1ed24-kube-api-access-g8gfr\") pod \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.430008 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-proxy-ca-bundles\") pod \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.430051 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-client-ca\") pod \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.430083 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-config\") pod \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.431018 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"5d56934a-19d0-4c31-a6df-afcabaa1ed24" (UID: "5d56934a-19d0-4c31-a6df-afcabaa1ed24"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.431042 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-config" (OuterVolumeSpecName: "config") pod "5d56934a-19d0-4c31-a6df-afcabaa1ed24" (UID: "5d56934a-19d0-4c31-a6df-afcabaa1ed24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.431125 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5d56934a-19d0-4c31-a6df-afcabaa1ed24" (UID: "5d56934a-19d0-4c31-a6df-afcabaa1ed24"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.436352 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d56934a-19d0-4c31-a6df-afcabaa1ed24-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5d56934a-19d0-4c31-a6df-afcabaa1ed24" (UID: "5d56934a-19d0-4c31-a6df-afcabaa1ed24"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.442988 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d56934a-19d0-4c31-a6df-afcabaa1ed24-kube-api-access-g8gfr" (OuterVolumeSpecName: "kube-api-access-g8gfr") pod "5d56934a-19d0-4c31-a6df-afcabaa1ed24" (UID: "5d56934a-19d0-4c31-a6df-afcabaa1ed24"). InnerVolumeSpecName "kube-api-access-g8gfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.531454 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.531495 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.531506 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.531515 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d56934a-19d0-4c31-a6df-afcabaa1ed24-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.531525 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8gfr\" (UniqueName: \"kubernetes.io/projected/5d56934a-19d0-4c31-a6df-afcabaa1ed24-kube-api-access-g8gfr\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.875566 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" event={"ID":"5d56934a-19d0-4c31-a6df-afcabaa1ed24","Type":"ContainerDied","Data":"57759453f91f6f1ae38bb4987e54806618ccbdad87e7c2e009c76f00cce3bbb3"} Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.875924 4782 scope.go:117] "RemoveContainer" containerID="1df2e027b51a908c06570aa226b9bc2e1d2b54ac7fe19b8a5f926341bb11cc47" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.876045 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.913602 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cbd88d5cd-55tml"] Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.916974 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cbd88d5cd-55tml"] Feb 02 10:42:18 crc kubenswrapper[4782]: I0202 10:42:18.827721 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d56934a-19d0-4c31-a6df-afcabaa1ed24" path="/var/lib/kubelet/pods/5d56934a-19d0-4c31-a6df-afcabaa1ed24/volumes" Feb 02 10:42:19 crc kubenswrapper[4782]: I0202 10:42:19.892386 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerStarted","Data":"d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4"} Feb 02 10:42:19 crc kubenswrapper[4782]: I0202 10:42:19.916817 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lxwg2" podStartSLOduration=5.5414169829999995 podStartE2EDuration="56.916799593s" podCreationTimestamp="2026-02-02 10:41:23 +0000 UTC" firstStartedPulling="2026-02-02 10:41:26.534920654 +0000 UTC m=+166.419113370" lastFinishedPulling="2026-02-02 10:42:17.910303274 +0000 UTC m=+217.794495980" observedRunningTime="2026-02-02 10:42:19.91530865 +0000 UTC m=+219.799501366" watchObservedRunningTime="2026-02-02 10:42:19.916799593 +0000 UTC m=+219.800992309" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.621317 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6ddd6bc986-5blnv"] Feb 02 10:42:20 crc kubenswrapper[4782]: E0202 10:42:20.624876 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4da25e-551a-4f31-9ee0-fb20b4589dfd" containerName="pruner" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.625032 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4da25e-551a-4f31-9ee0-fb20b4589dfd" containerName="pruner" Feb 02 10:42:20 crc kubenswrapper[4782]: E0202 10:42:20.625144 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d56934a-19d0-4c31-a6df-afcabaa1ed24" containerName="controller-manager" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.625229 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d56934a-19d0-4c31-a6df-afcabaa1ed24" containerName="controller-manager" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.625476 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f4da25e-551a-4f31-9ee0-fb20b4589dfd" containerName="pruner" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.625566 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d56934a-19d0-4c31-a6df-afcabaa1ed24" containerName="controller-manager" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.626197 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.628232 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6ddd6bc986-5blnv"] Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.686120 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.686248 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.686670 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.686895 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.691005 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.691243 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.691593 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.788263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e24dc2e-1431-4589-b097-598780357e04-serving-cert\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.788895 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-config\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.789075 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwnrl\" (UniqueName: \"kubernetes.io/projected/9e24dc2e-1431-4589-b097-598780357e04-kube-api-access-hwnrl\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.789210 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-proxy-ca-bundles\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.789353 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-client-ca\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.890759 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-proxy-ca-bundles\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.890847 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-client-ca\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.890899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e24dc2e-1431-4589-b097-598780357e04-serving-cert\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.890927 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-config\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.890964 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwnrl\" (UniqueName: \"kubernetes.io/projected/9e24dc2e-1431-4589-b097-598780357e04-kube-api-access-hwnrl\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.892348 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-client-ca\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.892402 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-proxy-ca-bundles\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.892481 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-config\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " 
pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.900710 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e24dc2e-1431-4589-b097-598780357e04-serving-cert\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.915920 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwnrl\" (UniqueName: \"kubernetes.io/projected/9e24dc2e-1431-4589-b097-598780357e04-kube-api-access-hwnrl\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.015142 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.618477 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6ddd6bc986-5blnv"] Feb 02 10:42:21 crc kubenswrapper[4782]: W0202 10:42:21.621394 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e24dc2e_1431_4589_b097_598780357e04.slice/crio-8f0a3c08c47d8edca9e135a47d16514e42acb5810c1e43010242d471b69287a2 WatchSource:0}: Error finding container 8f0a3c08c47d8edca9e135a47d16514e42acb5810c1e43010242d471b69287a2: Status 404 returned error can't find the container with id 8f0a3c08c47d8edca9e135a47d16514e42acb5810c1e43010242d471b69287a2 Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.830800 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.831229 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.830832 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.831325 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.907340 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" 
event={"ID":"9e24dc2e-1431-4589-b097-598780357e04","Type":"ContainerStarted","Data":"8f0a3c08c47d8edca9e135a47d16514e42acb5810c1e43010242d471b69287a2"} Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.914573 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" event={"ID":"9e24dc2e-1431-4589-b097-598780357e04","Type":"ContainerStarted","Data":"66572cccd6a885c897ab4785cdc26843f714a7271af3b389ab94ee6a776c2b8f"} Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.916058 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.920954 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.921299 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerStarted","Data":"05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5"} Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.938989 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" podStartSLOduration=24.93897031 podStartE2EDuration="24.93897031s" podCreationTimestamp="2026-02-02 10:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:22.938382553 +0000 UTC m=+222.822575269" watchObservedRunningTime="2026-02-02 10:42:22.93897031 +0000 UTC m=+222.823163026" Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.950911 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.950982 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.951048 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.951597 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.951669 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" 
containerID="cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810" gracePeriod=600 Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.968883 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8vzzf" podStartSLOduration=6.298205455 podStartE2EDuration="59.968867244s" podCreationTimestamp="2026-02-02 10:41:23 +0000 UTC" firstStartedPulling="2026-02-02 10:41:27.715794345 +0000 UTC m=+167.599987061" lastFinishedPulling="2026-02-02 10:42:21.386456134 +0000 UTC m=+221.270648850" observedRunningTime="2026-02-02 10:42:22.9593884 +0000 UTC m=+222.843581116" watchObservedRunningTime="2026-02-02 10:42:22.968867244 +0000 UTC m=+222.853059950" Feb 02 10:42:23 crc kubenswrapper[4782]: I0202 10:42:23.585477 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:42:23 crc kubenswrapper[4782]: I0202 10:42:23.585982 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:42:23 crc kubenswrapper[4782]: I0202 10:42:23.623330 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:42:23 crc kubenswrapper[4782]: I0202 10:42:23.623374 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:42:23 crc kubenswrapper[4782]: I0202 10:42:23.969775 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810" exitCode=0 Feb 02 10:42:23 crc kubenswrapper[4782]: I0202 10:42:23.969879 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810"} Feb 02 10:42:24 crc kubenswrapper[4782]: I0202 10:42:24.978714 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerStarted","Data":"85ac5bab2ba43b09defe92b07b8d6a701badfe4aee8aeb3a053b9fba1e253788"} Feb 02 10:42:24 crc kubenswrapper[4782]: I0202 10:42:24.981230 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"68181eab99dccd23b4af9f91ccc576ac3321f9b931dcb6edbebeb0694cfecf25"} Feb 02 10:42:24 crc kubenswrapper[4782]: I0202 10:42:24.997345 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5852s" podStartSLOduration=6.190952776 podStartE2EDuration="1m1.997326267s" podCreationTimestamp="2026-02-02 10:41:23 +0000 UTC" firstStartedPulling="2026-02-02 10:41:27.652168948 +0000 UTC m=+167.536361664" lastFinishedPulling="2026-02-02 10:42:23.458542439 +0000 UTC m=+223.342735155" observedRunningTime="2026-02-02 10:42:24.995162924 +0000 UTC m=+224.879355660" watchObservedRunningTime="2026-02-02 10:42:24.997326267 +0000 UTC m=+224.881518983" Feb 02 10:42:25 crc kubenswrapper[4782]: I0202 10:42:25.107473 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lxwg2" 
podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="registry-server" probeResult="failure" output=< Feb 02 10:42:25 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:42:25 crc kubenswrapper[4782]: > Feb 02 10:42:25 crc kubenswrapper[4782]: I0202 10:42:25.108277 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8vzzf" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="registry-server" probeResult="failure" output=< Feb 02 10:42:25 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:42:25 crc kubenswrapper[4782]: > Feb 02 10:42:31 crc kubenswrapper[4782]: I0202 10:42:31.019571 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerStarted","Data":"8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1"} Feb 02 10:42:31 crc kubenswrapper[4782]: I0202 10:42:31.022062 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerStarted","Data":"46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469"} Feb 02 10:42:31 crc kubenswrapper[4782]: I0202 10:42:31.024936 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerStarted","Data":"47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5"} Feb 02 10:42:31 crc kubenswrapper[4782]: I0202 10:42:31.033395 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerStarted","Data":"802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9"} Feb 02 10:42:31 crc kubenswrapper[4782]: I0202 10:42:31.037166 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerStarted","Data":"66cc69f12fda395ec4c6082c4f43f38994cb00dd4a625be34b594e4f4b899617"} Feb 02 10:42:31 crc kubenswrapper[4782]: I0202 10:42:31.837138 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.053312 4782 generic.go:334] "Generic (PLEG): container finished" podID="99330299-8910-4c41-b704-120a10eb799b" containerID="8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1" exitCode=0 Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.053396 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerDied","Data":"8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1"} Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.060477 4782 generic.go:334] "Generic (PLEG): container finished" podID="9beb5599-8c2d-4493-9561-cc2781d32052" containerID="47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5" exitCode=0 Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.060526 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" 
event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerDied","Data":"47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5"} Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.064686 4782 generic.go:334] "Generic (PLEG): container finished" podID="a893973e-e0b3-426e-8bf1-7902687b7036" containerID="802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9" exitCode=0 Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.064808 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerDied","Data":"802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9"} Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.066399 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerDied","Data":"66cc69f12fda395ec4c6082c4f43f38994cb00dd4a625be34b594e4f4b899617"} Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.066417 4782 generic.go:334] "Generic (PLEG): container finished" podID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerID="66cc69f12fda395ec4c6082c4f43f38994cb00dd4a625be34b594e4f4b899617" exitCode=0 Feb 02 10:42:33 crc kubenswrapper[4782]: I0202 10:42:33.074420 4782 generic.go:334] "Generic (PLEG): container finished" podID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerID="46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469" exitCode=0 Feb 02 10:42:33 crc kubenswrapper[4782]: I0202 10:42:33.074814 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerDied","Data":"46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469"} Feb 02 10:42:33 crc kubenswrapper[4782]: I0202 10:42:33.648238 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:42:33 crc kubenswrapper[4782]: I0202 10:42:33.667513 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:42:33 crc kubenswrapper[4782]: I0202 10:42:33.704847 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:42:33 crc kubenswrapper[4782]: I0202 10:42:33.722036 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:42:34 crc kubenswrapper[4782]: I0202 10:42:34.158793 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:42:34 crc kubenswrapper[4782]: I0202 10:42:34.159236 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:42:34 crc kubenswrapper[4782]: I0202 10:42:34.291184 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:42:35 crc kubenswrapper[4782]: I0202 10:42:35.120667 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:42:37 crc kubenswrapper[4782]: I0202 10:42:37.098013 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerStarted","Data":"01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1"} Feb 02 10:42:37 crc kubenswrapper[4782]: I0202 10:42:37.122747 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8g5bv" podStartSLOduration=5.848415542 podStartE2EDuration="1m14.122733508s" podCreationTimestamp="2026-02-02 10:41:23 +0000 UTC" firstStartedPulling="2026-02-02 10:41:27.73293112 +0000 UTC m=+167.617123836" lastFinishedPulling="2026-02-02 10:42:36.007249086 +0000 UTC m=+235.891441802" observedRunningTime="2026-02-02 10:42:37.119364581 +0000 UTC m=+237.003557297" watchObservedRunningTime="2026-02-02 10:42:37.122733508 +0000 UTC m=+237.006926224" Feb 02 10:42:37 crc kubenswrapper[4782]: I0202 10:42:37.495599 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5852s"] Feb 02 10:42:37 crc kubenswrapper[4782]: I0202 10:42:37.495917 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5852s" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="registry-server" containerID="cri-o://85ac5bab2ba43b09defe92b07b8d6a701badfe4aee8aeb3a053b9fba1e253788" gracePeriod=2 Feb 02 10:42:38 crc kubenswrapper[4782]: I0202 10:42:38.583915 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ddd6bc986-5blnv"] Feb 02 10:42:38 crc kubenswrapper[4782]: I0202 10:42:38.584489 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" podUID="9e24dc2e-1431-4589-b097-598780357e04" containerName="controller-manager" containerID="cri-o://66572cccd6a885c897ab4785cdc26843f714a7271af3b389ab94ee6a776c2b8f" gracePeriod=30 Feb 02 10:42:38 crc kubenswrapper[4782]: I0202 10:42:38.676867 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp"] Feb 02 10:42:38 crc kubenswrapper[4782]: I0202 10:42:38.677094 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" podUID="2227870a-e9fb-429e-a495-cfa17761d275" containerName="route-controller-manager" containerID="cri-o://daea6a83dc3b43407a13be8c2a6f6cbd39fcf1feca04fd1859d88ddf144f5b8e" gracePeriod=30 Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.112060 4782 generic.go:334] "Generic (PLEG): container finished" podID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerID="85ac5bab2ba43b09defe92b07b8d6a701badfe4aee8aeb3a053b9fba1e253788" exitCode=0 Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.112139 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerDied","Data":"85ac5bab2ba43b09defe92b07b8d6a701badfe4aee8aeb3a053b9fba1e253788"} Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.114502 4782 generic.go:334] "Generic (PLEG): container finished" podID="9e24dc2e-1431-4589-b097-598780357e04" containerID="66572cccd6a885c897ab4785cdc26843f714a7271af3b389ab94ee6a776c2b8f" exitCode=0 Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.114589 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" event={"ID":"9e24dc2e-1431-4589-b097-598780357e04","Type":"ContainerDied","Data":"66572cccd6a885c897ab4785cdc26843f714a7271af3b389ab94ee6a776c2b8f"} Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.117317 4782 generic.go:334] "Generic (PLEG): container finished" podID="2227870a-e9fb-429e-a495-cfa17761d275" containerID="daea6a83dc3b43407a13be8c2a6f6cbd39fcf1feca04fd1859d88ddf144f5b8e" exitCode=0 Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.117368 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" event={"ID":"2227870a-e9fb-429e-a495-cfa17761d275","Type":"ContainerDied","Data":"daea6a83dc3b43407a13be8c2a6f6cbd39fcf1feca04fd1859d88ddf144f5b8e"} Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.510203 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" podUID="03d47200-aed2-431d-89fd-c27cdd91564f" containerName="oauth-openshift" containerID="cri-o://df2490b959607b5dd5fcd068ecdc3e142f390fcbf0477d98f074718eab612f07" gracePeriod=15 Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.125528 4782 generic.go:334] "Generic (PLEG): container finished" podID="03d47200-aed2-431d-89fd-c27cdd91564f" containerID="df2490b959607b5dd5fcd068ecdc3e142f390fcbf0477d98f074718eab612f07" exitCode=0 Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.125595 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" event={"ID":"03d47200-aed2-431d-89fd-c27cdd91564f","Type":"ContainerDied","Data":"df2490b959607b5dd5fcd068ecdc3e142f390fcbf0477d98f074718eab612f07"} Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.288206 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.292619 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.357844 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-config\") pod \"9e24dc2e-1431-4589-b097-598780357e04\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.357993 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwnrl\" (UniqueName: \"kubernetes.io/projected/9e24dc2e-1431-4589-b097-598780357e04-kube-api-access-hwnrl\") pod \"9e24dc2e-1431-4589-b097-598780357e04\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.358033 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-client-ca\") pod \"9e24dc2e-1431-4589-b097-598780357e04\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.358073 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e24dc2e-1431-4589-b097-598780357e04-serving-cert\") pod \"9e24dc2e-1431-4589-b097-598780357e04\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.358101 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkzrc\" (UniqueName: \"kubernetes.io/projected/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-kube-api-access-vkzrc\") pod \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.358141 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-utilities\") pod \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.358164 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-catalog-content\") pod \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.358217 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-proxy-ca-bundles\") pod \"9e24dc2e-1431-4589-b097-598780357e04\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.359185 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9e24dc2e-1431-4589-b097-598780357e04" (UID: "9e24dc2e-1431-4589-b097-598780357e04"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.364893 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-config" (OuterVolumeSpecName: "config") pod "9e24dc2e-1431-4589-b097-598780357e04" (UID: "9e24dc2e-1431-4589-b097-598780357e04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.365176 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-client-ca" (OuterVolumeSpecName: "client-ca") pod "9e24dc2e-1431-4589-b097-598780357e04" (UID: "9e24dc2e-1431-4589-b097-598780357e04"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.365535 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-utilities" (OuterVolumeSpecName: "utilities") pod "2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" (UID: "2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.366507 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e24dc2e-1431-4589-b097-598780357e04-kube-api-access-hwnrl" (OuterVolumeSpecName: "kube-api-access-hwnrl") pod "9e24dc2e-1431-4589-b097-598780357e04" (UID: "9e24dc2e-1431-4589-b097-598780357e04"). InnerVolumeSpecName "kube-api-access-hwnrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.369042 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-kube-api-access-vkzrc" (OuterVolumeSpecName: "kube-api-access-vkzrc") pod "2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" (UID: "2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d"). InnerVolumeSpecName "kube-api-access-vkzrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.369570 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e24dc2e-1431-4589-b097-598780357e04-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9e24dc2e-1431-4589-b097-598780357e04" (UID: "9e24dc2e-1431-4589-b097-598780357e04"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.435784 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" (UID: "2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459902 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459941 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459951 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwnrl\" (UniqueName: \"kubernetes.io/projected/9e24dc2e-1431-4589-b097-598780357e04-kube-api-access-hwnrl\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459963 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459971 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e24dc2e-1431-4589-b097-598780357e04-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459982 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkzrc\" (UniqueName: \"kubernetes.io/projected/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-kube-api-access-vkzrc\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459990 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.460000 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.047810 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.133140 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" event={"ID":"9e24dc2e-1431-4589-b097-598780357e04","Type":"ContainerDied","Data":"8f0a3c08c47d8edca9e135a47d16514e42acb5810c1e43010242d471b69287a2"} Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.133207 4782 scope.go:117] "RemoveContainer" containerID="66572cccd6a885c897ab4785cdc26843f714a7271af3b389ab94ee6a776c2b8f" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.133218 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.138915 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" event={"ID":"2227870a-e9fb-429e-a495-cfa17761d275","Type":"ContainerDied","Data":"eb8b4451b7251617cfeca26bf86a321d4715359dfd467ddc355a4e63b2aa0184"} Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.138921 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.143675 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerDied","Data":"6ef548a38f0be82eadd409d2f97034be6e36d97b107a1699fdec8cc892db1b86"} Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.143777 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.158123 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ddd6bc986-5blnv"] Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.161699 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6ddd6bc986-5blnv"] Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.172142 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-config\") pod \"2227870a-e9fb-429e-a495-cfa17761d275\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.172263 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szf25\" (UniqueName: \"kubernetes.io/projected/2227870a-e9fb-429e-a495-cfa17761d275-kube-api-access-szf25\") pod \"2227870a-e9fb-429e-a495-cfa17761d275\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.172327 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2227870a-e9fb-429e-a495-cfa17761d275-serving-cert\") pod \"2227870a-e9fb-429e-a495-cfa17761d275\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.172374 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-client-ca\") pod \"2227870a-e9fb-429e-a495-cfa17761d275\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.173382 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-client-ca" (OuterVolumeSpecName: "client-ca") pod "2227870a-e9fb-429e-a495-cfa17761d275" (UID: "2227870a-e9fb-429e-a495-cfa17761d275"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.173407 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-config" (OuterVolumeSpecName: "config") pod "2227870a-e9fb-429e-a495-cfa17761d275" (UID: "2227870a-e9fb-429e-a495-cfa17761d275"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.175898 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5852s"] Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.177651 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2227870a-e9fb-429e-a495-cfa17761d275-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2227870a-e9fb-429e-a495-cfa17761d275" (UID: "2227870a-e9fb-429e-a495-cfa17761d275"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.177885 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2227870a-e9fb-429e-a495-cfa17761d275-kube-api-access-szf25" (OuterVolumeSpecName: "kube-api-access-szf25") pod "2227870a-e9fb-429e-a495-cfa17761d275" (UID: "2227870a-e9fb-429e-a495-cfa17761d275"). InnerVolumeSpecName "kube-api-access-szf25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.178570 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5852s"] Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.273780 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.273830 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szf25\" (UniqueName: \"kubernetes.io/projected/2227870a-e9fb-429e-a495-cfa17761d275-kube-api-access-szf25\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.273841 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2227870a-e9fb-429e-a495-cfa17761d275-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.273853 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.468148 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp"] Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.471825 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp"] Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.488263 4782 scope.go:117] "RemoveContainer" containerID="daea6a83dc3b43407a13be8c2a6f6cbd39fcf1feca04fd1859d88ddf144f5b8e" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.525045 4782 scope.go:117] "RemoveContainer" containerID="85ac5bab2ba43b09defe92b07b8d6a701badfe4aee8aeb3a053b9fba1e253788" Feb 02 10:42:41 crc 
kubenswrapper[4782]: I0202 10:42:41.592820 4782 scope.go:117] "RemoveContainer" containerID="2276d1082587cac8d61118d06023cf6c740850dc2c5e7490e914a2f87e0a7eb9" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.625104 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.671414 4782 scope.go:117] "RemoveContainer" containerID="e1f3c2a5262859f45791070cb15411b8e1b8e41e441cf2fae29b116544fe07c5" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.681923 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-audit-policies\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.681966 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh2jt\" (UniqueName: \"kubernetes.io/projected/03d47200-aed2-431d-89fd-c27cdd91564f-kube-api-access-vh2jt\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682003 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-router-certs\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682058 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-cliconfig\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682074 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-idp-0-file-data\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682104 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-ocp-branding-template\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682123 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-login\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682157 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-trusted-ca-bundle\") pod 
\"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682179 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-provider-selection\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682222 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-service-ca\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682246 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-error\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682270 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-serving-cert\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682299 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-session\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682332 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03d47200-aed2-431d-89fd-c27cdd91564f-audit-dir\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682746 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03d47200-aed2-431d-89fd-c27cdd91564f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.684585 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.685518 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.685528 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.693052 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.695148 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d47200-aed2-431d-89fd-c27cdd91564f-kube-api-access-vh2jt" (OuterVolumeSpecName: "kube-api-access-vh2jt") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "kube-api-access-vh2jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.695720 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.703578 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.707596 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.708707 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.713516 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.717967 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.718719 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.726649 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783685 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783728 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783742 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783755 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783769 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783781 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783793 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783805 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783817 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783829 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783840 4782 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03d47200-aed2-431d-89fd-c27cdd91564f-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783850 4782 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783862 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783876 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh2jt\" (UniqueName: \"kubernetes.io/projected/03d47200-aed2-431d-89fd-c27cdd91564f-kube-api-access-vh2jt\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.152907 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.153689 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" event={"ID":"03d47200-aed2-431d-89fd-c27cdd91564f","Type":"ContainerDied","Data":"3ff1f99d47a76aef7148a44cb594fe9fccb90137af935d28016f31e0538f0f1c"} Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.153829 4782 scope.go:117] "RemoveContainer" containerID="df2490b959607b5dd5fcd068ecdc3e142f390fcbf0477d98f074718eab612f07" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.184051 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sc7kt"] Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.189499 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sc7kt"] Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.688618 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58b7b45f6f-2t475"] Feb 02 10:42:42 crc kubenswrapper[4782]: E0202 10:42:42.689353 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="registry-server" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.689518 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="registry-server" Feb 02 10:42:42 crc kubenswrapper[4782]: E0202 10:42:42.689660 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="extract-utilities" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.689763 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="extract-utilities" Feb 02 10:42:42 crc kubenswrapper[4782]: E0202 10:42:42.689886 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d47200-aed2-431d-89fd-c27cdd91564f" containerName="oauth-openshift" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.689985 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d47200-aed2-431d-89fd-c27cdd91564f" containerName="oauth-openshift" Feb 02 10:42:42 crc kubenswrapper[4782]: E0202 10:42:42.690070 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="extract-content" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.690161 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="extract-content" 
Feb 02 10:42:42 crc kubenswrapper[4782]: E0202 10:42:42.690301 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e24dc2e-1431-4589-b097-598780357e04" containerName="controller-manager" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.690392 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e24dc2e-1431-4589-b097-598780357e04" containerName="controller-manager" Feb 02 10:42:42 crc kubenswrapper[4782]: E0202 10:42:42.690487 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2227870a-e9fb-429e-a495-cfa17761d275" containerName="route-controller-manager" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.690603 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2227870a-e9fb-429e-a495-cfa17761d275" containerName="route-controller-manager" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.690870 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="registry-server" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.690982 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2227870a-e9fb-429e-a495-cfa17761d275" containerName="route-controller-manager" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.691102 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d47200-aed2-431d-89fd-c27cdd91564f" containerName="oauth-openshift" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.691200 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e24dc2e-1431-4589-b097-598780357e04" containerName="controller-manager" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.691750 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6"] Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.691929 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.693198 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.696912 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.697112 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.697267 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.697459 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.701950 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.702077 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.702703 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.707258 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.729117 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.741961 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.743408 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.744602 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.745092 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.755433 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58b7b45f6f-2t475"] Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.773065 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6"] Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.800685 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-config\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.800764 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-886qd\" 
(UniqueName: \"kubernetes.io/projected/46d69997-45d2-4fc5-97fe-324abd43be7c-kube-api-access-886qd\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.800823 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-client-ca\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.800856 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-serving-cert\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.800887 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd8hw\" (UniqueName: \"kubernetes.io/projected/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-kube-api-access-hd8hw\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.801054 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-client-ca\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.801153 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d69997-45d2-4fc5-97fe-324abd43be7c-serving-cert\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.801177 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-proxy-ca-bundles\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.801274 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-config\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.832424 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="03d47200-aed2-431d-89fd-c27cdd91564f" path="/var/lib/kubelet/pods/03d47200-aed2-431d-89fd-c27cdd91564f/volumes" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.833133 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2227870a-e9fb-429e-a495-cfa17761d275" path="/var/lib/kubelet/pods/2227870a-e9fb-429e-a495-cfa17761d275/volumes" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.834024 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" path="/var/lib/kubelet/pods/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d/volumes" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.835415 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e24dc2e-1431-4589-b097-598780357e04" path="/var/lib/kubelet/pods/9e24dc2e-1431-4589-b097-598780357e04/volumes" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902668 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd8hw\" (UniqueName: \"kubernetes.io/projected/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-kube-api-access-hd8hw\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902728 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-client-ca\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902755 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d69997-45d2-4fc5-97fe-324abd43be7c-serving-cert\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902770 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-proxy-ca-bundles\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902799 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-config\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902821 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-config\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902840 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-886qd\" (UniqueName: \"kubernetes.io/projected/46d69997-45d2-4fc5-97fe-324abd43be7c-kube-api-access-886qd\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902871 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-client-ca\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902892 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-serving-cert\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.904707 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-config\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.905372 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-client-ca\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.905540 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-config\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.906406 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-client-ca\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.906812 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-proxy-ca-bundles\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.912557 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d69997-45d2-4fc5-97fe-324abd43be7c-serving-cert\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " 
pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.922986 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-886qd\" (UniqueName: \"kubernetes.io/projected/46d69997-45d2-4fc5-97fe-324abd43be7c-kube-api-access-886qd\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.924700 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-serving-cert\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.929010 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd8hw\" (UniqueName: \"kubernetes.io/projected/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-kube-api-access-hd8hw\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.040078 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.057264 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.199168 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerStarted","Data":"e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7"} Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.207030 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerStarted","Data":"27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59"} Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.231429 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g65rt" podStartSLOduration=4.5029323770000005 podStartE2EDuration="1m17.231407231s" podCreationTimestamp="2026-02-02 10:41:26 +0000 UTC" firstStartedPulling="2026-02-02 10:41:28.752479014 +0000 UTC m=+168.636671740" lastFinishedPulling="2026-02-02 10:42:41.480953878 +0000 UTC m=+241.365146594" observedRunningTime="2026-02-02 10:42:43.229108495 +0000 UTC m=+243.113301211" watchObservedRunningTime="2026-02-02 10:42:43.231407231 +0000 UTC m=+243.115599947" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.256526 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8tk99" podStartSLOduration=8.356861045 podStartE2EDuration="1m18.256504066s" podCreationTimestamp="2026-02-02 10:41:25 +0000 UTC" firstStartedPulling="2026-02-02 10:41:28.761231886 +0000 UTC m=+168.645424602" lastFinishedPulling="2026-02-02 10:42:38.660874907 +0000 UTC m=+238.545067623" 
observedRunningTime="2026-02-02 10:42:43.255038344 +0000 UTC m=+243.139231050" watchObservedRunningTime="2026-02-02 10:42:43.256504066 +0000 UTC m=+243.140696792" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.263162 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerStarted","Data":"6939dd2b86b314873208fc7be8f608a39a08ac73dd21d303f68aa3eaddde0aa0"} Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.275198 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerStarted","Data":"e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8"} Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.283730 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xmt8t" podStartSLOduration=4.579748335 podStartE2EDuration="1m17.283715332s" podCreationTimestamp="2026-02-02 10:41:26 +0000 UTC" firstStartedPulling="2026-02-02 10:41:28.780864463 +0000 UTC m=+168.665057179" lastFinishedPulling="2026-02-02 10:42:41.48483146 +0000 UTC m=+241.369024176" observedRunningTime="2026-02-02 10:42:43.280211961 +0000 UTC m=+243.164404677" watchObservedRunningTime="2026-02-02 10:42:43.283715332 +0000 UTC m=+243.167908048" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.319767 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-khjwl" podStartSLOduration=6.08709926 podStartE2EDuration="1m18.319749543s" podCreationTimestamp="2026-02-02 10:41:25 +0000 UTC" firstStartedPulling="2026-02-02 10:41:28.799099979 +0000 UTC m=+168.683292695" lastFinishedPulling="2026-02-02 10:42:41.031750262 +0000 UTC m=+240.915942978" observedRunningTime="2026-02-02 10:42:43.311043261 +0000 UTC m=+243.195235977" watchObservedRunningTime="2026-02-02 10:42:43.319749543 +0000 UTC m=+243.203942249" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.450154 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58b7b45f6f-2t475"] Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.519892 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6"] Feb 02 10:42:43 crc kubenswrapper[4782]: W0202 10:42:43.537437 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e30f31e_9e81_4b3f_a680_a84918f9e7ec.slice/crio-944e504f7664a3445683cb50cf91015cca415c853c6ae2c6623b8ce4bf506ee0 WatchSource:0}: Error finding container 944e504f7664a3445683cb50cf91015cca415c853c6ae2c6623b8ce4bf506ee0: Status 404 returned error can't find the container with id 944e504f7664a3445683cb50cf91015cca415c853c6ae2c6623b8ce4bf506ee0 Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.885970 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.886018 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.934418 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.292992 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" event={"ID":"46d69997-45d2-4fc5-97fe-324abd43be7c","Type":"ContainerStarted","Data":"31803234b3e6e5e02aacf6f32b0728560be1f2f91def8457a24f11c30c6f30d9"} Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.293048 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" event={"ID":"46d69997-45d2-4fc5-97fe-324abd43be7c","Type":"ContainerStarted","Data":"6d2418109eeeba4b0106f128a727272504228d5ce1b9780ebff9ed573127420d"} Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.293353 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.297996 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" event={"ID":"1e30f31e-9e81-4b3f-a680-a84918f9e7ec","Type":"ContainerStarted","Data":"9b3a89ab30ae80001691fec459871214ee3a74122b7f05bc4d16d051b4bde636"} Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.298071 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" event={"ID":"1e30f31e-9e81-4b3f-a680-a84918f9e7ec","Type":"ContainerStarted","Data":"944e504f7664a3445683cb50cf91015cca415c853c6ae2c6623b8ce4bf506ee0"} Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.298392 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.320293 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.324554 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.330282 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" podStartSLOduration=6.330265523 podStartE2EDuration="6.330265523s" podCreationTimestamp="2026-02-02 10:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:44.328180322 +0000 UTC m=+244.212373038" watchObservedRunningTime="2026-02-02 10:42:44.330265523 +0000 UTC m=+244.214458239" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.396825 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" podStartSLOduration=6.396805575 podStartE2EDuration="6.396805575s" podCreationTimestamp="2026-02-02 10:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:44.396730692 +0000 UTC m=+244.280923408" watchObservedRunningTime="2026-02-02 10:42:44.396805575 +0000 UTC m=+244.280998291" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.406408 4782 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.031546 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.031713 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.087531 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.134597 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.134960 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.180046 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.699997 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.700061 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.078679 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.079349 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.398929 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.404991 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.458725 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8g5bv"] Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.458957 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8g5bv" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="registry-server" containerID="cri-o://01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1" gracePeriod=2 Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.736409 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g65rt" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="registry-server" probeResult="failure" output=< Feb 02 10:42:47 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:42:47 crc kubenswrapper[4782]: > Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.913818 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.982171 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-utilities\") pod \"a893973e-e0b3-426e-8bf1-7902687b7036\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.982301 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb5v4\" (UniqueName: \"kubernetes.io/projected/a893973e-e0b3-426e-8bf1-7902687b7036-kube-api-access-nb5v4\") pod \"a893973e-e0b3-426e-8bf1-7902687b7036\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.982324 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-catalog-content\") pod \"a893973e-e0b3-426e-8bf1-7902687b7036\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.987836 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-utilities" (OuterVolumeSpecName: "utilities") pod "a893973e-e0b3-426e-8bf1-7902687b7036" (UID: "a893973e-e0b3-426e-8bf1-7902687b7036"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.000923 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a893973e-e0b3-426e-8bf1-7902687b7036-kube-api-access-nb5v4" (OuterVolumeSpecName: "kube-api-access-nb5v4") pod "a893973e-e0b3-426e-8bf1-7902687b7036" (UID: "a893973e-e0b3-426e-8bf1-7902687b7036"). InnerVolumeSpecName "kube-api-access-nb5v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.038097 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a893973e-e0b3-426e-8bf1-7902687b7036" (UID: "a893973e-e0b3-426e-8bf1-7902687b7036"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.083955 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.084006 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb5v4\" (UniqueName: \"kubernetes.io/projected/a893973e-e0b3-426e-8bf1-7902687b7036-kube-api-access-nb5v4\") on node \"crc\" DevicePath \"\""
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.084021 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.116950 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xmt8t" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="registry-server" probeResult="failure" output=<
Feb 02 10:42:48 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s
Feb 02 10:42:48 crc kubenswrapper[4782]: >
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.370068 4782 generic.go:334] "Generic (PLEG): container finished" podID="a893973e-e0b3-426e-8bf1-7902687b7036" containerID="01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1" exitCode=0
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.370127 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerDied","Data":"01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1"}
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.370186 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerDied","Data":"f8713e44a60ae45253bec2e5d10994fc19863aeccf7c6e956f5738780c8b26dd"}
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.370187 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8g5bv"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.370207 4782 scope.go:117] "RemoveContainer" containerID="01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.387999 4782 scope.go:117] "RemoveContainer" containerID="802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.404372 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8g5bv"]
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.404416 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8g5bv"]
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.410727 4782 scope.go:117] "RemoveContainer" containerID="c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.423194 4782 scope.go:117] "RemoveContainer" containerID="01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1"
Feb 02 10:42:48 crc kubenswrapper[4782]: E0202 10:42:48.424521 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1\": container with ID starting with 01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1 not found: ID does not exist" containerID="01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.424753 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1"} err="failed to get container status \"01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1\": rpc error: code = NotFound desc = could not find container \"01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1\": container with ID starting with 01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1 not found: ID does not exist"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.424882 4782 scope.go:117] "RemoveContainer" containerID="802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9"
Feb 02 10:42:48 crc kubenswrapper[4782]: E0202 10:42:48.425374 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9\": container with ID starting with 802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9 not found: ID does not exist" containerID="802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.425433 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9"} err="failed to get container status \"802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9\": rpc error: code = NotFound desc = could not find container \"802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9\": container with ID starting with 802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9 not found: ID does not exist"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.425471 4782 scope.go:117] "RemoveContainer" containerID="c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c"
Feb 02 10:42:48 crc kubenswrapper[4782]: E0202 10:42:48.426305 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c\": container with ID starting with c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c not found: ID does not exist" containerID="c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.426456 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c"} err="failed to get container status \"c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c\": rpc error: code = NotFound desc = could not find container \"c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c\": container with ID starting with c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c not found: ID does not exist"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.695698 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-58d4f98775-wfb2c"]
Feb 02 10:42:48 crc kubenswrapper[4782]: E0202 10:42:48.695931 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="registry-server"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.695943 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="registry-server"
Feb 02 10:42:48 crc kubenswrapper[4782]: E0202 10:42:48.695951 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="extract-utilities"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.695957 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="extract-utilities"
Feb 02 10:42:48 crc kubenswrapper[4782]: E0202 10:42:48.695978 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="extract-content"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.695985 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="extract-content"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.696082 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="registry-server"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.696549 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.702767 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.702843 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.705534 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.705983 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.706052 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.706116 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.706232 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.706333 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.706415 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.707571 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.707828 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.720906 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.723223 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.727459 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58d4f98775-wfb2c"]
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.731065 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.737437 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795005 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795057 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rgm6\" (UniqueName: \"kubernetes.io/projected/c29682a5-f95a-4209-a484-db8524d68df6-kube-api-access-9rgm6\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795087 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795107 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-audit-policies\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795131 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c29682a5-f95a-4209-a484-db8524d68df6-audit-dir\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795166 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-session\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795182 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795203 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795223 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-error\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795242 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795262 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795281 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-login\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795300 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795318 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.827932 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" path="/var/lib/kubelet/pods/a893973e-e0b3-426e-8bf1-7902687b7036/volumes"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896787 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896833 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rgm6\" (UniqueName: \"kubernetes.io/projected/c29682a5-f95a-4209-a484-db8524d68df6-kube-api-access-9rgm6\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896865 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896884 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-audit-policies\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896911 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c29682a5-f95a-4209-a484-db8524d68df6-audit-dir\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896949 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-session\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896966 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896988 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.897010 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-error\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.897030 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.897048 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.897068 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-login\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.897087 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.897104 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.898907 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.901101 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.901697 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c29682a5-f95a-4209-a484-db8524d68df6-audit-dir\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.902084 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-audit-policies\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.904590 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-login\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.906203 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.906734 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.909016 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-error\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.910059 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-session\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.910394 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.910767 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.911611 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.912039 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.923634 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rgm6\" (UniqueName: \"kubernetes.io/projected/c29682a5-f95a-4209-a484-db8524d68df6-kube-api-access-9rgm6\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:49 crc kubenswrapper[4782]: I0202 10:42:49.020701 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:49 crc kubenswrapper[4782]: I0202 10:42:49.447115 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58d4f98775-wfb2c"]
Feb 02 10:42:49 crc kubenswrapper[4782]: W0202 10:42:49.452659 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc29682a5_f95a_4209_a484_db8524d68df6.slice/crio-e5a103f75ba691620f68c01e77a73d6f80b417ab770263fca777203baf3d6328 WatchSource:0}: Error finding container e5a103f75ba691620f68c01e77a73d6f80b417ab770263fca777203baf3d6328: Status 404 returned error can't find the container with id e5a103f75ba691620f68c01e77a73d6f80b417ab770263fca777203baf3d6328
Feb 02 10:42:49 crc kubenswrapper[4782]: I0202 10:42:49.855936 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-khjwl"]
Feb 02 10:42:49 crc kubenswrapper[4782]: I0202 10:42:49.856405 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-khjwl" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="registry-server" containerID="cri-o://e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8" gracePeriod=2
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.338069 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khjwl"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.384490 4782 generic.go:334] "Generic (PLEG): container finished" podID="99330299-8910-4c41-b704-120a10eb799b" containerID="e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8" exitCode=0
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.384554 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khjwl"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.384582 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerDied","Data":"e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8"}
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.384680 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerDied","Data":"a77f63d55d27418e43d1dec8a78bc759af36972ea35d4cdd887b4e0dd5624442"}
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.384706 4782 scope.go:117] "RemoveContainer" containerID="e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.389607 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" event={"ID":"c29682a5-f95a-4209-a484-db8524d68df6","Type":"ContainerStarted","Data":"d68bb677536422b81d61d2032951e7962b3a067a30743b7a9786e94e9d33bbf4"}
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.389727 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" event={"ID":"c29682a5-f95a-4209-a484-db8524d68df6","Type":"ContainerStarted","Data":"e5a103f75ba691620f68c01e77a73d6f80b417ab770263fca777203baf3d6328"}
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.390129 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.395607 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.402868 4782 scope.go:117] "RemoveContainer" containerID="8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.421679 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-utilities\") pod \"99330299-8910-4c41-b704-120a10eb799b\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") "
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.421759 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-catalog-content\") pod \"99330299-8910-4c41-b704-120a10eb799b\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") "
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.421814 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r7ff\" (UniqueName: \"kubernetes.io/projected/99330299-8910-4c41-b704-120a10eb799b-kube-api-access-2r7ff\") pod \"99330299-8910-4c41-b704-120a10eb799b\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") "
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.424867 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-utilities" (OuterVolumeSpecName: "utilities") pod "99330299-8910-4c41-b704-120a10eb799b" (UID: "99330299-8910-4c41-b704-120a10eb799b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.430881 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99330299-8910-4c41-b704-120a10eb799b-kube-api-access-2r7ff" (OuterVolumeSpecName: "kube-api-access-2r7ff") pod "99330299-8910-4c41-b704-120a10eb799b" (UID: "99330299-8910-4c41-b704-120a10eb799b"). InnerVolumeSpecName "kube-api-access-2r7ff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.433461 4782 scope.go:117] "RemoveContainer" containerID="4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.459555 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99330299-8910-4c41-b704-120a10eb799b" (UID: "99330299-8910-4c41-b704-120a10eb799b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.462458 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" podStartSLOduration=36.462431034 podStartE2EDuration="36.462431034s" podCreationTimestamp="2026-02-02 10:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:50.423526151 +0000 UTC m=+250.307718867" watchObservedRunningTime="2026-02-02 10:42:50.462431034 +0000 UTC m=+250.346623750"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.465588 4782 scope.go:117] "RemoveContainer" containerID="e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.467814 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8\": container with ID starting with e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8 not found: ID does not exist" containerID="e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.467848 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8"} err="failed to get container status \"e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8\": rpc error: code = NotFound desc = could not find container \"e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8\": container with ID starting with e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8 not found: ID does not exist"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.467870 4782 scope.go:117] "RemoveContainer" containerID="8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.468085 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1\": container with ID starting with 8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1 not found: ID does not exist" containerID="8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.468110 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1"} err="failed to get container status \"8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1\": rpc error: code = NotFound desc = could not find container \"8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1\": container with ID starting with 8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1 not found: ID does not exist"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.468124 4782 scope.go:117] "RemoveContainer" containerID="4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.468288 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4\": container with ID starting with 4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4 not found: ID does not exist" containerID="4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.468309 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4"} err="failed to get container status \"4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4\": rpc error: code = NotFound desc = could not find container \"4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4\": container with ID starting with 4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4 not found: ID does not exist"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.523825 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.523869 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.523882 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r7ff\" (UniqueName: \"kubernetes.io/projected/99330299-8910-4c41-b704-120a10eb799b-kube-api-access-2r7ff\") on node \"crc\" DevicePath \"\""
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.719332 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-khjwl"]
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.727817 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-khjwl"]
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.827830 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99330299-8910-4c41-b704-120a10eb799b" path="/var/lib/kubelet/pods/99330299-8910-4c41-b704-120a10eb799b/volumes"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.904077 4782 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.904337 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="registry-server"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.904351 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="registry-server"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.904372 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="extract-content"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.904378 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="extract-content"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.904391 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="extract-utilities"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.904398 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="extract-utilities"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.904498 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="registry-server"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.904842 4782 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.905121 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0" gracePeriod=15
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.905282 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.905855 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60" gracePeriod=15
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.906027 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805" gracePeriod=15
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.906081 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8" gracePeriod=15
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.906213 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded" gracePeriod=15
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.907953 4782 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908187 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908198 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908214 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908220 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908232 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908373 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908381 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908393 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908399 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908405 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908413 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908420 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908433 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908439 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908533 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908542 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908550 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908558 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908565 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908571 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908578 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908721 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908729 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.001221 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.018101 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body=
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.018195 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.030998 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031171 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031307 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031335 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031392 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031413 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031494 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031542 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133354 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133389 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133430 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133455 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133445 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133500 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133496 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133522 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133540 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133587 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133605 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133653 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133681 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133689 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.295983 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: W0202 10:42:51.318170 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1bda597b2ac46775872565ce942832ca2ca030579b777223cb961f0828f30e65 WatchSource:0}: Error finding container 1bda597b2ac46775872565ce942832ca2ca030579b777223cb961f0828f30e65: Status 404 returned error can't find the container with id 1bda597b2ac46775872565ce942832ca2ca030579b777223cb961f0828f30e65
Feb 02 10:42:51 crc kubenswrapper[4782]: E0202 10:42:51.321251 4782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189067f8adb7269c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:42:51.320616604 +0000 UTC m=+251.204809320,LastTimestamp:2026-02-02 10:42:51.320616604 +0000 UTC m=+251.204809320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.399193 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.400588 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.401442 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60" exitCode=0
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.401552 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805" exitCode=0
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.401641 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded" exitCode=0
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.401538 4782 scope.go:117] "RemoveContainer" containerID="b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.401738 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8" exitCode=2
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.404180 4782 generic.go:334] "Generic (PLEG): container finished" podID="bf2939f4-fa35-4f01-a896-2ddc746ac111" containerID="33a3c62f71f1956073f3a08b721e64058cac19abb9f4f54ee5048a1701d7cade" exitCode=0
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.404234 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf2939f4-fa35-4f01-a896-2ddc746ac111","Type":"ContainerDied","Data":"33a3c62f71f1956073f3a08b721e64058cac19abb9f4f54ee5048a1701d7cade"}
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.404903 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.405174 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.405491 4782 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.406090 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1bda597b2ac46775872565ce942832ca2ca030579b777223cb961f0828f30e65"}
Feb 02 10:42:51 crc kubenswrapper[4782]: E0202 10:42:51.697677 4782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189067f8adb7269c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:42:51.320616604 +0000 UTC m=+251.204809320,LastTimestamp:2026-02-02 10:42:51.320616604 +0000 UTC m=+251.204809320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.001344 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.001422 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.419596 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.431356 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1"}
Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.431463 4782 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.431859 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.432363 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:42:52 crc kubenswrapper[4782]: E0202 10:42:52.527022 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:42:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:42:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:42:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:42:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:16bd4f1f638e2804c94376e7aeb23a5f7c4d4454daea701355b4ddc9cf56c32b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:60ba1ea91ee5da37bae2691ab5afcbce9a0a9a358b560cf9dfa3d0ed31d0f68d\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1677305094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2a90cf243fbfd094eb63d7a2a33273de7c2f1514b6cd1c79c41877afea08b6fb\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ed24521cd932d4c0868705817ce9245137b311c52f233ce6070bc1d8c801494b\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201985265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:2670356312bbb840d7febc1ea21dc5e4918a25688063c071665b3750c5c57fc4\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:afd352e0300c7d4fcdd48a3b0ee053b3a6f1f3be3e8dd47ee68a17be62d779d9\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1191129845},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for 
node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: E0202 10:42:52.527587 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: E0202 10:42:52.527924 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: E0202 10:42:52.528134 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: E0202 10:42:52.528306 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: E0202 10:42:52.528319 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.793121 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.794187 4782 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.794556 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.795045 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.858901 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-kubelet-dir\") pod \"bf2939f4-fa35-4f01-a896-2ddc746ac111\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.858953 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2939f4-fa35-4f01-a896-2ddc746ac111-kube-api-access\") pod \"bf2939f4-fa35-4f01-a896-2ddc746ac111\" 
(UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.859078 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-var-lock\") pod \"bf2939f4-fa35-4f01-a896-2ddc746ac111\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.859174 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf2939f4-fa35-4f01-a896-2ddc746ac111" (UID: "bf2939f4-fa35-4f01-a896-2ddc746ac111"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.859281 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-var-lock" (OuterVolumeSpecName: "var-lock") pod "bf2939f4-fa35-4f01-a896-2ddc746ac111" (UID: "bf2939f4-fa35-4f01-a896-2ddc746ac111"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.859580 4782 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.859606 4782 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.864872 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2939f4-fa35-4f01-a896-2ddc746ac111-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf2939f4-fa35-4f01-a896-2ddc746ac111" (UID: "bf2939f4-fa35-4f01-a896-2ddc746ac111"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.960933 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2939f4-fa35-4f01-a896-2ddc746ac111-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.378394 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.380046 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.380680 4782 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.382402 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.382600 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.439290 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.440844 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0" exitCode=0 Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.440943 4782 scope.go:117] "RemoveContainer" containerID="cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.441045 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.444230 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf2939f4-fa35-4f01-a896-2ddc746ac111","Type":"ContainerDied","Data":"acb92178b080f16f9482f40e0b16c2c17b6094a867d22c2de5c7014e8aa3b4cd"} Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.444294 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acb92178b080f16f9482f40e0b16c2c17b6094a867d22c2de5c7014e8aa3b4cd" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.444330 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.459144 4782 scope.go:117] "RemoveContainer" containerID="f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.465198 4782 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.465472 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.465768 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.474381 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.474477 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.474502 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.474880 4782 scope.go:117] "RemoveContainer" containerID="9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.474997 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.475037 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.475104 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.486728 4782 scope.go:117] "RemoveContainer" containerID="1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.497063 4782 scope.go:117] "RemoveContainer" containerID="66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.509720 4782 scope.go:117] "RemoveContainer" containerID="2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.524987 4782 scope.go:117] "RemoveContainer" containerID="cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60" Feb 02 10:42:53 crc kubenswrapper[4782]: E0202 10:42:53.525492 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\": container with ID starting with cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60 not found: ID does not exist" containerID="cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.525529 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60"} err="failed to get container status \"cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\": rpc error: code = NotFound desc = could not find container \"cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\": container with ID starting with cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60 not found: ID does not exist" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.525550 4782 scope.go:117] "RemoveContainer" containerID="f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805" Feb 02 10:42:53 crc kubenswrapper[4782]: E0202 10:42:53.525841 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\": container with ID starting with f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805 not found: ID does not exist" containerID="f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.525861 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805"} err="failed to get container status \"f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\": rpc error: code = NotFound desc = could not find container \"f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\": container with ID starting with f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805 not found: ID does not exist" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.525877 4782 
scope.go:117] "RemoveContainer" containerID="9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded" Feb 02 10:42:53 crc kubenswrapper[4782]: E0202 10:42:53.526227 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\": container with ID starting with 9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded not found: ID does not exist" containerID="9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.526250 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded"} err="failed to get container status \"9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\": rpc error: code = NotFound desc = could not find container \"9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\": container with ID starting with 9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded not found: ID does not exist" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.526267 4782 scope.go:117] "RemoveContainer" containerID="1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8" Feb 02 10:42:53 crc kubenswrapper[4782]: E0202 10:42:53.526511 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\": container with ID starting with 1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8 not found: ID does not exist" containerID="1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.526536 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8"} err="failed to get container status \"1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\": rpc error: code = NotFound desc = could not find container \"1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\": container with ID starting with 1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8 not found: ID does not exist" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.526554 4782 scope.go:117] "RemoveContainer" containerID="66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0" Feb 02 10:42:53 crc kubenswrapper[4782]: E0202 10:42:53.526809 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\": container with ID starting with 66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0 not found: ID does not exist" containerID="66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.526841 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0"} err="failed to get container status \"66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\": rpc error: code = NotFound desc = could not find container \"66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\": container with ID starting with 
66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0 not found: ID does not exist" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.526859 4782 scope.go:117] "RemoveContainer" containerID="2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e" Feb 02 10:42:53 crc kubenswrapper[4782]: E0202 10:42:53.527081 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\": container with ID starting with 2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e not found: ID does not exist" containerID="2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.527106 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e"} err="failed to get container status \"2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\": rpc error: code = NotFound desc = could not find container \"2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\": container with ID starting with 2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e not found: ID does not exist" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.576299 4782 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.576345 4782 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.576361 4782 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.755503 4782 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.756088 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.756386 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:54 crc kubenswrapper[4782]: I0202 10:42:54.831709 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 02 10:42:56 crc 
kubenswrapper[4782]: I0202 10:42:56.746194 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.746964 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.747331 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.747832 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.783469 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.783947 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.784208 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.784380 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.123394 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.123805 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.124180 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.124483 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.124777 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.171998 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.172547 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.172909 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.173211 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.173897 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: I0202 10:43:00.824620 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: I0202 10:43:00.825750 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc 
kubenswrapper[4782]: I0202 10:43:00.826127 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: I0202 10:43:00.826435 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: E0202 10:43:00.901034 4782 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: E0202 10:43:00.901388 4782 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: E0202 10:43:00.901771 4782 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: E0202 10:43:00.902090 4782 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: E0202 10:43:00.902416 4782 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: I0202 10:43:00.902451 4782 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 02 10:43:00 crc kubenswrapper[4782]: E0202 10:43:00.902736 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Feb 02 10:43:01 crc kubenswrapper[4782]: E0202 10:43:01.103198 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Feb 02 10:43:01 crc kubenswrapper[4782]: E0202 10:43:01.503778 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Feb 02 10:43:01 crc kubenswrapper[4782]: E0202 10:43:01.698898 4782 event.go:368] "Unable to write event 
(may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189067f8adb7269c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:42:51.320616604 +0000 UTC m=+251.204809320,LastTimestamp:2026-02-02 10:42:51.320616604 +0000 UTC m=+251.204809320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.304934 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.888375 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:43:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:43:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:43:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:43:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:16bd4f1f638e2804c94376e7aeb23a5f7c4d4454daea701355b4ddc9cf56c32b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:60ba1ea91ee5da37bae2691ab5afcbce9a0a9a358b560cf9dfa3d0ed31d0f68d\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1677305094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2a90cf243fbfd094eb63d7a2a33273de7c2f1514b6cd1c79c41877afea08b6fb\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ed24521cd932d4c0868705817ce9245137b311c52f233ce6070bc1d8c801494b\\\",\\\"registry.redhat.io/r
edhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201985265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:2670356312bbb840d7febc1ea21dc5e4918a25688063c071665b3750c5c57fc4\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:afd352e0300c7d4fcdd48a3b0ee053b3a6f1f3be3e8dd47ee68a17be62d779d9\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1191129845},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.888927 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.889164 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.889438 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.889788 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.889812 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 02 10:43:03 crc kubenswrapper[4782]: E0202 10:43:03.906343 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.505895 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.506038 4782 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25" exitCode=1
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.506072 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25"}
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.506548 4782 scope.go:117] "RemoveContainer" containerID="18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.507822 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.508306 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.508567 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.509032 4782 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.509570 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.820492 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.821764 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.822197 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.822742 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.823213 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.823604 4782 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.836632 4782 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.836801 4782 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664"
Feb 02 10:43:04 crc kubenswrapper[4782]: E0202 10:43:04.837340 4782 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.837833 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.515939 4782 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6c958a5f5b0c5b0a64b5c2e8839ba9e407ef1a2c983bea4f44d941bfd7ed3dd9" exitCode=0
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.516026 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6c958a5f5b0c5b0a64b5c2e8839ba9e407ef1a2c983bea4f44d941bfd7ed3dd9"}
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.516060 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5bd6663a15dd44020f8f80ee1c6a99e758e8d0c2617c942ebd4288fdbd3d6c77"}
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.516563 4782 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.516610 4782 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.517807 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:05 crc kubenswrapper[4782]: E0202 10:43:05.517822 4782 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.518560 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.519113 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.519624 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.520065 4782 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.521381 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.521439 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"28ff8f809aa892efb59230ce281eba5df7dc1a64de78fc0b780249b88f330ba3"}
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.522248 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.522622 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.523105 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.523557 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.523857 4782 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 02 10:43:06 crc kubenswrapper[4782]: I0202 10:43:06.532143 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"40ad35bd4d242e0e22102e73d8a02ee88a90963b98e5cd12d0824efcc577f8ef"}
Feb 02 10:43:06 crc kubenswrapper[4782]: I0202 10:43:06.532667 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2be182a941ac386c312d4a418c453f60fa6a1c72b325472f3f007a986836972c"}
Feb 02 10:43:06 crc kubenswrapper[4782]: I0202 10:43:06.532684 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a493f4d3dc5e3fdd9c12b7667c0f2ac5904235b90709c24285fac24b62c98fa4"}
Feb 02 10:43:06 crc kubenswrapper[4782]: I0202 10:43:06.532699 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f59e237bd46bca2e6b43377878fa4e82f3ddb0350ef33698316ec62af34a75ec"}
Feb 02 10:43:07 crc kubenswrapper[4782]: I0202 10:43:07.539740 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f49c8d1f9ce3432a691c7f218589635842fc347053e803f8273378ac7857984c"}
Feb 02 10:43:07 crc kubenswrapper[4782]: I0202 10:43:07.540094 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:43:07 crc kubenswrapper[4782]: I0202 10:43:07.540059 4782 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664"
Feb 02 10:43:07 crc kubenswrapper[4782]: I0202 10:43:07.540117 4782 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664"
Feb 02 10:43:09 crc kubenswrapper[4782]: I0202 10:43:09.838400 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:43:09 crc kubenswrapper[4782]: I0202 10:43:09.838900 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:43:09 crc kubenswrapper[4782]: I0202 10:43:09.843521 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:43:12 crc kubenswrapper[4782]: I0202 10:43:12.551023 4782 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:43:12 crc kubenswrapper[4782]: I0202 10:43:12.576444 4782 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664"
Feb 02 10:43:12 crc kubenswrapper[4782]: I0202 10:43:12.576477 4782 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664"
Feb 02 10:43:12 crc kubenswrapper[4782]: I0202 10:43:12.585943 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:43:12 crc kubenswrapper[4782]: I0202 10:43:12.659050 4782 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="481e4a35-7272-4866-bd4e-0c00e1a57e4d"
Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.375338 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.376582 4782 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.376738 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.401039 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.585753 4782 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664"
Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.587770 4782 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664"
Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.589747 4782 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="481e4a35-7272-4866-bd4e-0c00e1a57e4d"
Feb 02 10:43:22 crc kubenswrapper[4782]: I0202 10:43:22.315855 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 02 10:43:22 crc kubenswrapper[4782]: I0202 10:43:22.419932 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.353348 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.376568 4782 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.376653 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.392920 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.470064 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.653733 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.754818 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.838134 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.842731 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.089194 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.203517 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.205255 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.223936 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.514460 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.636689 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.727455 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.729162 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.789281 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.898402 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.949999 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.988732 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.033515 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.076354 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.192167 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.200022 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.223383 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.371465 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.427032 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.484579 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.525067 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.571004 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.721241 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.876728 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.936635 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.937341 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.189274 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.266024 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.371922 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.452508 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.460807 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.489184 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.499173 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.544811 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.634954 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.649269 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.728670 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.855768 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.858572 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.993068 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.032260 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.044218 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.048106 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.128938 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.255858 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.287494 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.309739 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.328274 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.407630 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.433393 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.479937 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.556962 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.570807 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.580265 4782 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.582414 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=37.582395747 podStartE2EDuration="37.582395747s" podCreationTimestamp="2026-02-02 10:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:43:12.569608527 +0000 UTC m=+272.453801263" watchObservedRunningTime="2026-02-02 10:43:27.582395747 +0000 UTC m=+287.466588463"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.587384 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.587434 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.591676 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.621325 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.621310544 podStartE2EDuration="15.621310544s" podCreationTimestamp="2026-02-02 10:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:43:27.604150377 +0000 UTC m=+287.488343093" watchObservedRunningTime="2026-02-02 10:43:27.621310544 +0000 UTC m=+287.505503260"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.663959 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.689823 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.690352 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.781519 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.850034 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.955734 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.956342 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.986509 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.023746 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.101211 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.144280 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.206387 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.246013 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.247861 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.389655 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.394327 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.549862 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.642278 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.709948 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.715067 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.735007 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.762276 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.793795 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.810713 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.871132 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.885942 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.888206 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.967759 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.027504 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.039778 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.062890 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.099845 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.183758 4782 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.211911 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.242506 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.293142 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.327759 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.328224 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.330604 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.398362 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.432162 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.442596 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.496716 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.588169 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.596564 4782 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.644172 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.682931 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.722306 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.781372 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.781372 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.797926 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.816145 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.859963 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.888275 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.914721 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.974284 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.982210 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.982926 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.041923 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.118137 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.124189 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.187743 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.209811 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.228404 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.316035 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.317474 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.344766 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.482895 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.727566 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.750037 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.767319 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.819356 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.844301 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.096023 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.195477 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.319905 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.321276 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.414770 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.428064 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.469866 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.494476 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.501940 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.553589 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.710353 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.720746 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.736675 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.746676 4782 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.751286 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.761465 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.764209 4782 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.797236 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.915041 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.915468 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.995373 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.111489 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.166030 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.206258 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.232021 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.289307 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.307937 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.437126 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.473754 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.532892 4782 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.592957 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.602611 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.607951 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.648837 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.666510 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.691739 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.797443 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.825959 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.915788 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.958091 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.973258 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.008420 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.017764 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.050202 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.060247 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.191334 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.250781 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.370717 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.375987 4782 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.376045 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.376093 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.376759 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"28ff8f809aa892efb59230ce281eba5df7dc1a64de78fc0b780249b88f330ba3"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.376879 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://28ff8f809aa892efb59230ce281eba5df7dc1a64de78fc0b780249b88f330ba3" gracePeriod=30
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.402771 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.490252 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.653812 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.695602 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.773071 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.825445 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.879688 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.912130 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.954387 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.005862 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.006516 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.112812 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.122407 4782 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.208739 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.274189 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.316894 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.369735 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.458926 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.568095 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.634953 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.699283 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.740713 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.770317 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.857585 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.874485 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.038584 4782 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.039080 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1" gracePeriod=5 Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.357340 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.408378 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.516808 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.535416 4782 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.650792 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.667854 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.682347 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.780126 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.780299 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.797035 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.800298 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.818002 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.896329 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.075130 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.170432 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.379086 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.453907 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.484424 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.495509 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.869577 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.916375 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.938461 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.947007 4782 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.949679 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.028090 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.037843 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.042562 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.275841 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.459577 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.483197 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.494798 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.708559 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.801009 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.903393 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 10:43:38 crc kubenswrapper[4782]: I0202 10:43:38.129030 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 10:43:38 crc kubenswrapper[4782]: I0202 10:43:38.149337 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 10:43:38 crc kubenswrapper[4782]: I0202 10:43:38.329853 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 10:43:38 crc kubenswrapper[4782]: I0202 10:43:38.365698 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 10:43:38 crc kubenswrapper[4782]: I0202 10:43:38.400492 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 10:43:39 crc kubenswrapper[4782]: I0202 10:43:39.040424 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 10:43:39 crc kubenswrapper[4782]: I0202 10:43:39.118575 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 10:43:40 crc 
kubenswrapper[4782]: I0202 10:43:40.138927 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.209987 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.210072 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.274378 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.390976 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.391098 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.391116 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.391185 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.391200 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.391134 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.391446 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.392744 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.392822 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.393098 4782 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.393117 4782 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.393125 4782 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.393134 4782 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.409914 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.494103 4782 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.622750 4782 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.781862 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.781930 4782 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1" exitCode=137 Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.781980 4782 scope.go:117] "RemoveContainer" containerID="058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.782080 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.803474 4782 scope.go:117] "RemoveContainer" containerID="058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1" Feb 02 10:43:40 crc kubenswrapper[4782]: E0202 10:43:40.804108 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1\": container with ID starting with 058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1 not found: ID does not exist" containerID="058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.804199 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1"} err="failed to get container status \"058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1\": rpc error: code = NotFound desc = could not find container \"058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1\": container with ID starting with 058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1 not found: ID does not exist" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.830471 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.830784 4782 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.846397 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.846679 4782 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f8fefb3a-d7fb-4f51-9ea1-0a686216c819" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 
10:43:40.851474 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.851525 4782 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f8fefb3a-d7fb-4f51-9ea1-0a686216c819" Feb 02 10:43:52 crc kubenswrapper[4782]: I0202 10:43:52.847387 4782 generic.go:334] "Generic (PLEG): container finished" podID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerID="7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0" exitCode=0 Feb 02 10:43:52 crc kubenswrapper[4782]: I0202 10:43:52.847485 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" event={"ID":"83c24a27-fdbe-468f-b4cf-780c87b598ae","Type":"ContainerDied","Data":"7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0"} Feb 02 10:43:52 crc kubenswrapper[4782]: I0202 10:43:52.848333 4782 scope.go:117] "RemoveContainer" containerID="7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0" Feb 02 10:43:53 crc kubenswrapper[4782]: I0202 10:43:53.614611 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:43:53 crc kubenswrapper[4782]: I0202 10:43:53.614703 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:43:53 crc kubenswrapper[4782]: I0202 10:43:53.875964 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" event={"ID":"83c24a27-fdbe-468f-b4cf-780c87b598ae","Type":"ContainerStarted","Data":"7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18"} Feb 02 10:43:53 crc kubenswrapper[4782]: I0202 10:43:53.876270 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:43:53 crc kubenswrapper[4782]: I0202 10:43:53.880402 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:44:03 crc kubenswrapper[4782]: I0202 10:44:03.954299 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 10:44:03 crc kubenswrapper[4782]: I0202 10:44:03.957033 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 10:44:03 crc kubenswrapper[4782]: I0202 10:44:03.957180 4782 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="28ff8f809aa892efb59230ce281eba5df7dc1a64de78fc0b780249b88f330ba3" exitCode=137 Feb 02 10:44:03 crc kubenswrapper[4782]: I0202 10:44:03.957301 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"28ff8f809aa892efb59230ce281eba5df7dc1a64de78fc0b780249b88f330ba3"} Feb 02 10:44:03 crc kubenswrapper[4782]: I0202 10:44:03.957405 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f6323010df1c369418604188acdf1da1eb55bdb99241d2d9922d246fe008c4e0"} Feb 02 10:44:03 crc kubenswrapper[4782]: I0202 10:44:03.957496 4782 scope.go:117] "RemoveContainer" containerID="18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25" Feb 02 10:44:04 crc kubenswrapper[4782]: I0202 10:44:04.964769 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 10:44:13 crc kubenswrapper[4782]: I0202 10:44:13.375352 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:44:13 crc kubenswrapper[4782]: I0202 10:44:13.379212 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:44:13 crc kubenswrapper[4782]: I0202 10:44:13.400942 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:44:13 crc kubenswrapper[4782]: I0202 10:44:13.405782 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:44:20 crc kubenswrapper[4782]: I0202 10:44:20.694481 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6"] Feb 02 10:44:20 crc kubenswrapper[4782]: I0202 10:44:20.695299 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" podUID="1e30f31e-9e81-4b3f-a680-a84918f9e7ec" containerName="route-controller-manager" containerID="cri-o://9b3a89ab30ae80001691fec459871214ee3a74122b7f05bc4d16d051b4bde636" gracePeriod=30 Feb 02 10:44:20 crc kubenswrapper[4782]: I0202 10:44:20.700863 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58b7b45f6f-2t475"] Feb 02 10:44:20 crc kubenswrapper[4782]: I0202 10:44:20.701112 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" podUID="46d69997-45d2-4fc5-97fe-324abd43be7c" containerName="controller-manager" containerID="cri-o://31803234b3e6e5e02aacf6f32b0728560be1f2f91def8457a24f11c30c6f30d9" gracePeriod=30 Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.059600 4782 generic.go:334] "Generic (PLEG): container finished" podID="1e30f31e-9e81-4b3f-a680-a84918f9e7ec" containerID="9b3a89ab30ae80001691fec459871214ee3a74122b7f05bc4d16d051b4bde636" exitCode=0 Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.059954 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" event={"ID":"1e30f31e-9e81-4b3f-a680-a84918f9e7ec","Type":"ContainerDied","Data":"9b3a89ab30ae80001691fec459871214ee3a74122b7f05bc4d16d051b4bde636"} Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.062004 4782 generic.go:334] "Generic (PLEG): container finished" podID="46d69997-45d2-4fc5-97fe-324abd43be7c" containerID="31803234b3e6e5e02aacf6f32b0728560be1f2f91def8457a24f11c30c6f30d9" exitCode=0 Feb 02 10:44:21 crc 
kubenswrapper[4782]: I0202 10:44:21.062051 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" event={"ID":"46d69997-45d2-4fc5-97fe-324abd43be7c","Type":"ContainerDied","Data":"31803234b3e6e5e02aacf6f32b0728560be1f2f91def8457a24f11c30c6f30d9"} Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.232768 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.239781 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269603 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-config\") pod \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269691 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-client-ca\") pod \"46d69997-45d2-4fc5-97fe-324abd43be7c\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269718 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d69997-45d2-4fc5-97fe-324abd43be7c-serving-cert\") pod \"46d69997-45d2-4fc5-97fe-324abd43be7c\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269750 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-serving-cert\") pod \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269783 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-886qd\" (UniqueName: \"kubernetes.io/projected/46d69997-45d2-4fc5-97fe-324abd43be7c-kube-api-access-886qd\") pod \"46d69997-45d2-4fc5-97fe-324abd43be7c\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269815 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-client-ca\") pod \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269840 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-proxy-ca-bundles\") pod \"46d69997-45d2-4fc5-97fe-324abd43be7c\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269880 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd8hw\" (UniqueName: \"kubernetes.io/projected/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-kube-api-access-hd8hw\") pod \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\" (UID: 
\"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269922 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-config\") pod \"46d69997-45d2-4fc5-97fe-324abd43be7c\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.271099 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-client-ca" (OuterVolumeSpecName: "client-ca") pod "1e30f31e-9e81-4b3f-a680-a84918f9e7ec" (UID: "1e30f31e-9e81-4b3f-a680-a84918f9e7ec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.272055 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.272883 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "46d69997-45d2-4fc5-97fe-324abd43be7c" (UID: "46d69997-45d2-4fc5-97fe-324abd43be7c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.273595 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-config" (OuterVolumeSpecName: "config") pod "46d69997-45d2-4fc5-97fe-324abd43be7c" (UID: "46d69997-45d2-4fc5-97fe-324abd43be7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.274769 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-config" (OuterVolumeSpecName: "config") pod "1e30f31e-9e81-4b3f-a680-a84918f9e7ec" (UID: "1e30f31e-9e81-4b3f-a680-a84918f9e7ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.274092 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-client-ca" (OuterVolumeSpecName: "client-ca") pod "46d69997-45d2-4fc5-97fe-324abd43be7c" (UID: "46d69997-45d2-4fc5-97fe-324abd43be7c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.275019 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1e30f31e-9e81-4b3f-a680-a84918f9e7ec" (UID: "1e30f31e-9e81-4b3f-a680-a84918f9e7ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.279830 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-kube-api-access-hd8hw" (OuterVolumeSpecName: "kube-api-access-hd8hw") pod "1e30f31e-9e81-4b3f-a680-a84918f9e7ec" (UID: "1e30f31e-9e81-4b3f-a680-a84918f9e7ec"). InnerVolumeSpecName "kube-api-access-hd8hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.283874 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d69997-45d2-4fc5-97fe-324abd43be7c-kube-api-access-886qd" (OuterVolumeSpecName: "kube-api-access-886qd") pod "46d69997-45d2-4fc5-97fe-324abd43be7c" (UID: "46d69997-45d2-4fc5-97fe-324abd43be7c"). InnerVolumeSpecName "kube-api-access-886qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.285061 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d69997-45d2-4fc5-97fe-324abd43be7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "46d69997-45d2-4fc5-97fe-324abd43be7c" (UID: "46d69997-45d2-4fc5-97fe-324abd43be7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373754 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373799 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd8hw\" (UniqueName: \"kubernetes.io/projected/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-kube-api-access-hd8hw\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373817 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373831 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373842 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373852 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d69997-45d2-4fc5-97fe-324abd43be7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373862 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373875 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-886qd\" (UniqueName: \"kubernetes.io/projected/46d69997-45d2-4fc5-97fe-324abd43be7c-kube-api-access-886qd\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc 
kubenswrapper[4782]: I0202 10:44:21.756016 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-pbzkm"] Feb 02 10:44:21 crc kubenswrapper[4782]: E0202 10:44:21.756247 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756259 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 10:44:21 crc kubenswrapper[4782]: E0202 10:44:21.756274 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d69997-45d2-4fc5-97fe-324abd43be7c" containerName="controller-manager" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756280 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d69997-45d2-4fc5-97fe-324abd43be7c" containerName="controller-manager" Feb 02 10:44:21 crc kubenswrapper[4782]: E0202 10:44:21.756290 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" containerName="installer" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756296 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" containerName="installer" Feb 02 10:44:21 crc kubenswrapper[4782]: E0202 10:44:21.756306 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e30f31e-9e81-4b3f-a680-a84918f9e7ec" containerName="route-controller-manager" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756312 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e30f31e-9e81-4b3f-a680-a84918f9e7ec" containerName="route-controller-manager" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756452 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756467 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e30f31e-9e81-4b3f-a680-a84918f9e7ec" containerName="route-controller-manager" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756482 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" containerName="installer" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756494 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d69997-45d2-4fc5-97fe-324abd43be7c" containerName="controller-manager" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756928 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.769606 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-pbzkm"] Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.778186 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6grgm\" (UniqueName: \"kubernetes.io/projected/8f6cd214-bca2-452a-825d-cf4b07972e83-kube-api-access-6grgm\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.778406 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-config\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.778520 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-client-ca\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.778605 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-proxy-ca-bundles\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.778704 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f6cd214-bca2-452a-825d-cf4b07972e83-serving-cert\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.879382 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6grgm\" (UniqueName: \"kubernetes.io/projected/8f6cd214-bca2-452a-825d-cf4b07972e83-kube-api-access-6grgm\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.879823 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-config\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.880697 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-client-ca\") pod 
\"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.880729 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-proxy-ca-bundles\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.880758 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f6cd214-bca2-452a-825d-cf4b07972e83-serving-cert\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.881305 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-config\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.881486 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-client-ca\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.882680 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-proxy-ca-bundles\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.885771 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f6cd214-bca2-452a-825d-cf4b07972e83-serving-cert\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.899182 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6grgm\" (UniqueName: \"kubernetes.io/projected/8f6cd214-bca2-452a-825d-cf4b07972e83-kube-api-access-6grgm\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.068527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" event={"ID":"46d69997-45d2-4fc5-97fe-324abd43be7c","Type":"ContainerDied","Data":"6d2418109eeeba4b0106f128a727272504228d5ce1b9780ebff9ed573127420d"} Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.068896 4782 scope.go:117] "RemoveContainer" 
containerID="31803234b3e6e5e02aacf6f32b0728560be1f2f91def8457a24f11c30c6f30d9" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.068546 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.070708 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" event={"ID":"1e30f31e-9e81-4b3f-a680-a84918f9e7ec","Type":"ContainerDied","Data":"944e504f7664a3445683cb50cf91015cca415c853c6ae2c6623b8ce4bf506ee0"} Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.070751 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.077967 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.085387 4782 scope.go:117] "RemoveContainer" containerID="9b3a89ab30ae80001691fec459871214ee3a74122b7f05bc4d16d051b4bde636" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.099911 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58b7b45f6f-2t475"] Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.105190 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58b7b45f6f-2t475"] Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.127177 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6"] Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.145879 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6"] Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.493546 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-pbzkm"] Feb 02 10:44:22 crc kubenswrapper[4782]: W0202 10:44:22.502627 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f6cd214_bca2_452a_825d_cf4b07972e83.slice/crio-570cce5fe0dff1b59e4f7acf454c500f5d6f86ed2664600b29e8b219ef83edb7 WatchSource:0}: Error finding container 570cce5fe0dff1b59e4f7acf454c500f5d6f86ed2664600b29e8b219ef83edb7: Status 404 returned error can't find the container with id 570cce5fe0dff1b59e4f7acf454c500f5d6f86ed2664600b29e8b219ef83edb7 Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.762106 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht"] Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.762986 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.765591 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.765755 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.766249 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.766324 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.767531 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.767544 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.781745 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht"] Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.789002 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z9cm\" (UniqueName: \"kubernetes.io/projected/46e9aaad-cb96-4f13-bc66-f88eacc38399-kube-api-access-8z9cm\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.789085 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46e9aaad-cb96-4f13-bc66-f88eacc38399-serving-cert\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.789169 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46e9aaad-cb96-4f13-bc66-f88eacc38399-config\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.789197 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46e9aaad-cb96-4f13-bc66-f88eacc38399-client-ca\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.836745 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e30f31e-9e81-4b3f-a680-a84918f9e7ec" path="/var/lib/kubelet/pods/1e30f31e-9e81-4b3f-a680-a84918f9e7ec/volumes" Feb 02 10:44:22 crc kubenswrapper[4782]: 
I0202 10:44:22.837514 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46d69997-45d2-4fc5-97fe-324abd43be7c" path="/var/lib/kubelet/pods/46d69997-45d2-4fc5-97fe-324abd43be7c/volumes" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.890846 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46e9aaad-cb96-4f13-bc66-f88eacc38399-serving-cert\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.890923 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46e9aaad-cb96-4f13-bc66-f88eacc38399-config\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.890952 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46e9aaad-cb96-4f13-bc66-f88eacc38399-client-ca\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.890983 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z9cm\" (UniqueName: \"kubernetes.io/projected/46e9aaad-cb96-4f13-bc66-f88eacc38399-kube-api-access-8z9cm\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.892541 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46e9aaad-cb96-4f13-bc66-f88eacc38399-client-ca\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.893277 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46e9aaad-cb96-4f13-bc66-f88eacc38399-config\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.898869 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46e9aaad-cb96-4f13-bc66-f88eacc38399-serving-cert\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.922849 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z9cm\" (UniqueName: \"kubernetes.io/projected/46e9aaad-cb96-4f13-bc66-f88eacc38399-kube-api-access-8z9cm\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: 
\"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.077462 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" event={"ID":"8f6cd214-bca2-452a-825d-cf4b07972e83","Type":"ContainerStarted","Data":"f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731"} Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.077517 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" event={"ID":"8f6cd214-bca2-452a-825d-cf4b07972e83","Type":"ContainerStarted","Data":"570cce5fe0dff1b59e4f7acf454c500f5d6f86ed2664600b29e8b219ef83edb7"} Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.077963 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.083005 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.085717 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.116471 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" podStartSLOduration=3.116450823 podStartE2EDuration="3.116450823s" podCreationTimestamp="2026-02-02 10:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:44:23.111654353 +0000 UTC m=+342.995847069" watchObservedRunningTime="2026-02-02 10:44:23.116450823 +0000 UTC m=+343.000643539" Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.398605 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht"] Feb 02 10:44:24 crc kubenswrapper[4782]: I0202 10:44:24.087781 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" event={"ID":"46e9aaad-cb96-4f13-bc66-f88eacc38399","Type":"ContainerStarted","Data":"95aa230da32bcb1bf08dbbd63fdafe110f519061353eb56e2e29745ccbbdc49c"} Feb 02 10:44:24 crc kubenswrapper[4782]: I0202 10:44:24.088161 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" event={"ID":"46e9aaad-cb96-4f13-bc66-f88eacc38399","Type":"ContainerStarted","Data":"2f0d5845b18f9b34db9716ca13cd2e1236c9dbb6b300d2b1ce8ec998fb6301b9"} Feb 02 10:44:24 crc kubenswrapper[4782]: I0202 10:44:24.110476 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" podStartSLOduration=4.11045906 podStartE2EDuration="4.11045906s" podCreationTimestamp="2026-02-02 10:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:44:24.109123151 +0000 UTC m=+343.993315867" watchObservedRunningTime="2026-02-02 10:44:24.11045906 +0000 UTC m=+343.994651776" Feb 02 10:44:25 
crc kubenswrapper[4782]: I0202 10:44:25.092203 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:25 crc kubenswrapper[4782]: I0202 10:44:25.096408 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:35 crc kubenswrapper[4782]: I0202 10:44:35.831055 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xmt8t"] Feb 02 10:44:35 crc kubenswrapper[4782]: I0202 10:44:35.832079 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xmt8t" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="registry-server" containerID="cri-o://6939dd2b86b314873208fc7be8f608a39a08ac73dd21d303f68aa3eaddde0aa0" gracePeriod=2 Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.155714 4782 generic.go:334] "Generic (PLEG): container finished" podID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerID="6939dd2b86b314873208fc7be8f608a39a08ac73dd21d303f68aa3eaddde0aa0" exitCode=0 Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.155802 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerDied","Data":"6939dd2b86b314873208fc7be8f608a39a08ac73dd21d303f68aa3eaddde0aa0"} Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.290941 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.482192 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-catalog-content\") pod \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.482299 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h72b9\" (UniqueName: \"kubernetes.io/projected/213698f8-d1b6-489f-8fc4-a69583d4fc2e-kube-api-access-h72b9\") pod \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.482394 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-utilities\") pod \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.483496 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-utilities" (OuterVolumeSpecName: "utilities") pod "213698f8-d1b6-489f-8fc4-a69583d4fc2e" (UID: "213698f8-d1b6-489f-8fc4-a69583d4fc2e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.490947 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213698f8-d1b6-489f-8fc4-a69583d4fc2e-kube-api-access-h72b9" (OuterVolumeSpecName: "kube-api-access-h72b9") pod "213698f8-d1b6-489f-8fc4-a69583d4fc2e" (UID: "213698f8-d1b6-489f-8fc4-a69583d4fc2e"). InnerVolumeSpecName "kube-api-access-h72b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.584354 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.584387 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h72b9\" (UniqueName: \"kubernetes.io/projected/213698f8-d1b6-489f-8fc4-a69583d4fc2e-kube-api-access-h72b9\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.601651 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "213698f8-d1b6-489f-8fc4-a69583d4fc2e" (UID: "213698f8-d1b6-489f-8fc4-a69583d4fc2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.687133 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.163475 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerDied","Data":"c20f4c43562c9a26701d05b9c48459ab9215c0f89e3d7636a6006f20e7c4c9aa"} Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.163526 4782 scope.go:117] "RemoveContainer" containerID="6939dd2b86b314873208fc7be8f608a39a08ac73dd21d303f68aa3eaddde0aa0" Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.164709 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.180163 4782 scope.go:117] "RemoveContainer" containerID="66cc69f12fda395ec4c6082c4f43f38994cb00dd4a625be34b594e4f4b899617" Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.181567 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xmt8t"] Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.186564 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xmt8t"] Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.200126 4782 scope.go:117] "RemoveContainer" containerID="e85622bd784d09d56836a615239db13244c2d5b26841db53fd14e1ec1665771e" Feb 02 10:44:38 crc kubenswrapper[4782]: I0202 10:44:38.556888 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-pbzkm"] Feb 02 10:44:38 crc kubenswrapper[4782]: I0202 10:44:38.557919 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" podUID="8f6cd214-bca2-452a-825d-cf4b07972e83" containerName="controller-manager" containerID="cri-o://f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731" gracePeriod=30 Feb 02 10:44:38 crc kubenswrapper[4782]: I0202 10:44:38.828123 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" path="/var/lib/kubelet/pods/213698f8-d1b6-489f-8fc4-a69583d4fc2e/volumes" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.061936 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.128365 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6grgm\" (UniqueName: \"kubernetes.io/projected/8f6cd214-bca2-452a-825d-cf4b07972e83-kube-api-access-6grgm\") pod \"8f6cd214-bca2-452a-825d-cf4b07972e83\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.128425 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-client-ca\") pod \"8f6cd214-bca2-452a-825d-cf4b07972e83\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.128485 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-proxy-ca-bundles\") pod \"8f6cd214-bca2-452a-825d-cf4b07972e83\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.128511 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f6cd214-bca2-452a-825d-cf4b07972e83-serving-cert\") pod \"8f6cd214-bca2-452a-825d-cf4b07972e83\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.128532 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-config\") pod \"8f6cd214-bca2-452a-825d-cf4b07972e83\" (UID: 
\"8f6cd214-bca2-452a-825d-cf4b07972e83\") " Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.129249 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-client-ca" (OuterVolumeSpecName: "client-ca") pod "8f6cd214-bca2-452a-825d-cf4b07972e83" (UID: "8f6cd214-bca2-452a-825d-cf4b07972e83"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.129296 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-config" (OuterVolumeSpecName: "config") pod "8f6cd214-bca2-452a-825d-cf4b07972e83" (UID: "8f6cd214-bca2-452a-825d-cf4b07972e83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.130172 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8f6cd214-bca2-452a-825d-cf4b07972e83" (UID: "8f6cd214-bca2-452a-825d-cf4b07972e83"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.133082 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6cd214-bca2-452a-825d-cf4b07972e83-kube-api-access-6grgm" (OuterVolumeSpecName: "kube-api-access-6grgm") pod "8f6cd214-bca2-452a-825d-cf4b07972e83" (UID: "8f6cd214-bca2-452a-825d-cf4b07972e83"). InnerVolumeSpecName "kube-api-access-6grgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.134130 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6cd214-bca2-452a-825d-cf4b07972e83-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8f6cd214-bca2-452a-825d-cf4b07972e83" (UID: "8f6cd214-bca2-452a-825d-cf4b07972e83"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.175724 4782 generic.go:334] "Generic (PLEG): container finished" podID="8f6cd214-bca2-452a-825d-cf4b07972e83" containerID="f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731" exitCode=0 Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.175789 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" event={"ID":"8f6cd214-bca2-452a-825d-cf4b07972e83","Type":"ContainerDied","Data":"f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731"} Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.175828 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" event={"ID":"8f6cd214-bca2-452a-825d-cf4b07972e83","Type":"ContainerDied","Data":"570cce5fe0dff1b59e4f7acf454c500f5d6f86ed2664600b29e8b219ef83edb7"} Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.175855 4782 scope.go:117] "RemoveContainer" containerID="f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.175993 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.193923 4782 scope.go:117] "RemoveContainer" containerID="f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731" Feb 02 10:44:39 crc kubenswrapper[4782]: E0202 10:44:39.194264 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731\": container with ID starting with f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731 not found: ID does not exist" containerID="f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.194306 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731"} err="failed to get container status \"f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731\": rpc error: code = NotFound desc = could not find container \"f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731\": container with ID starting with f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731 not found: ID does not exist" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.207079 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-pbzkm"] Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.210589 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-pbzkm"] Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.230035 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.230070 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6grgm\" (UniqueName: \"kubernetes.io/projected/8f6cd214-bca2-452a-825d-cf4b07972e83-kube-api-access-6grgm\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.230084 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.230095 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.230107 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f6cd214-bca2-452a-825d-cf4b07972e83-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.770853 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26"] Feb 02 10:44:39 crc kubenswrapper[4782]: E0202 10:44:39.772252 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="extract-content" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.772331 4782 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="extract-content" Feb 02 10:44:39 crc kubenswrapper[4782]: E0202 10:44:39.772393 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="extract-utilities" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.772504 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="extract-utilities" Feb 02 10:44:39 crc kubenswrapper[4782]: E0202 10:44:39.772615 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f6cd214-bca2-452a-825d-cf4b07972e83" containerName="controller-manager" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.772728 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f6cd214-bca2-452a-825d-cf4b07972e83" containerName="controller-manager" Feb 02 10:44:39 crc kubenswrapper[4782]: E0202 10:44:39.772983 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="registry-server" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.773044 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="registry-server" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.773206 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="registry-server" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.773282 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f6cd214-bca2-452a-825d-cf4b07972e83" containerName="controller-manager" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.774306 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.777095 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.777168 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.777196 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.777238 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.777387 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.778392 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.787431 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.793423 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26"] Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.837737 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-proxy-ca-bundles\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.837848 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f7507c-7ecb-41de-9fd5-937b5961db89-serving-cert\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.837908 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-config\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.837956 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-client-ca\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.838005 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggwh4\" (UniqueName: 
\"kubernetes.io/projected/81f7507c-7ecb-41de-9fd5-937b5961db89-kube-api-access-ggwh4\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.939171 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-client-ca\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.939310 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggwh4\" (UniqueName: \"kubernetes.io/projected/81f7507c-7ecb-41de-9fd5-937b5961db89-kube-api-access-ggwh4\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.939370 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-proxy-ca-bundles\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.939489 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f7507c-7ecb-41de-9fd5-937b5961db89-serving-cert\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.939539 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-config\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.940257 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-client-ca\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.940538 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-proxy-ca-bundles\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.942236 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-config\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " 
pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.958008 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f7507c-7ecb-41de-9fd5-937b5961db89-serving-cert\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.962494 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggwh4\" (UniqueName: \"kubernetes.io/projected/81f7507c-7ecb-41de-9fd5-937b5961db89-kube-api-access-ggwh4\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:40 crc kubenswrapper[4782]: I0202 10:44:40.090382 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:40 crc kubenswrapper[4782]: I0202 10:44:40.530699 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26"] Feb 02 10:44:40 crc kubenswrapper[4782]: I0202 10:44:40.829915 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f6cd214-bca2-452a-825d-cf4b07972e83" path="/var/lib/kubelet/pods/8f6cd214-bca2-452a-825d-cf4b07972e83/volumes" Feb 02 10:44:41 crc kubenswrapper[4782]: I0202 10:44:41.222053 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" event={"ID":"81f7507c-7ecb-41de-9fd5-937b5961db89","Type":"ContainerStarted","Data":"d28a38552b32912efe369435160fbb3919332e8ea0da6319eb5d16dda7efed1f"} Feb 02 10:44:41 crc kubenswrapper[4782]: I0202 10:44:41.222773 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:41 crc kubenswrapper[4782]: I0202 10:44:41.222798 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" event={"ID":"81f7507c-7ecb-41de-9fd5-937b5961db89","Type":"ContainerStarted","Data":"f64721fccd7d689be4ab56aa0504d2d4f511727958f3c9c2c4660bc300d39cc5"} Feb 02 10:44:41 crc kubenswrapper[4782]: I0202 10:44:41.227563 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:41 crc kubenswrapper[4782]: I0202 10:44:41.247253 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" podStartSLOduration=3.247234778 podStartE2EDuration="3.247234778s" podCreationTimestamp="2026-02-02 10:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:44:41.244316743 +0000 UTC m=+361.128509459" watchObservedRunningTime="2026-02-02 10:44:41.247234778 +0000 UTC m=+361.131427484" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.240794 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-94nc9"] Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.242195 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.271372 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-94nc9"] Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-registry-tls\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396293 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8584c52-c370-4f38-9965-6938b9cd2892-ca-trust-extracted\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396338 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8584c52-c370-4f38-9965-6938b9cd2892-trusted-ca\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396359 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jhnk\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-kube-api-access-6jhnk\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396378 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8584c52-c370-4f38-9965-6938b9cd2892-registry-certificates\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396400 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-bound-sa-token\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396426 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396444 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/f8584c52-c370-4f38-9965-6938b9cd2892-installation-pull-secrets\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.424261 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.499502 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8584c52-c370-4f38-9965-6938b9cd2892-ca-trust-extracted\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.499603 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8584c52-c370-4f38-9965-6938b9cd2892-trusted-ca\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.499656 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jhnk\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-kube-api-access-6jhnk\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.499689 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8584c52-c370-4f38-9965-6938b9cd2892-registry-certificates\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.499727 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-bound-sa-token\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.499976 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f8584c52-c370-4f38-9965-6938b9cd2892-installation-pull-secrets\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.500081 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8584c52-c370-4f38-9965-6938b9cd2892-ca-trust-extracted\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.501236 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8584c52-c370-4f38-9965-6938b9cd2892-registry-certificates\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.501321 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-registry-tls\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.503181 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8584c52-c370-4f38-9965-6938b9cd2892-trusted-ca\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.509597 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-registry-tls\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.509603 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f8584c52-c370-4f38-9965-6938b9cd2892-installation-pull-secrets\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.520507 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jhnk\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-kube-api-access-6jhnk\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.522280 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-bound-sa-token\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.560237 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:45 crc kubenswrapper[4782]: I0202 10:44:45.040428 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-94nc9"] Feb 02 10:44:45 crc kubenswrapper[4782]: I0202 10:44:45.241995 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" event={"ID":"f8584c52-c370-4f38-9965-6938b9cd2892","Type":"ContainerStarted","Data":"9850426c326ae10a179fd5d9dbc57b29cf75a1754cb725c0c324ae48e8e1d56a"} Feb 02 10:44:45 crc kubenswrapper[4782]: I0202 10:44:45.242280 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" event={"ID":"f8584c52-c370-4f38-9965-6938b9cd2892","Type":"ContainerStarted","Data":"29287eafff5fb685e0991101d7901dc013c298b37742e02278ffb2d1ba8fedbc"} Feb 02 10:44:45 crc kubenswrapper[4782]: I0202 10:44:45.242662 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:45 crc kubenswrapper[4782]: I0202 10:44:45.274830 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" podStartSLOduration=1.2748101090000001 podStartE2EDuration="1.274810109s" podCreationTimestamp="2026-02-02 10:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:44:45.261717958 +0000 UTC m=+365.145910674" watchObservedRunningTime="2026-02-02 10:44:45.274810109 +0000 UTC m=+365.159002835" Feb 02 10:44:52 crc kubenswrapper[4782]: I0202 10:44:52.951478 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:44:52 crc kubenswrapper[4782]: I0202 10:44:52.952112 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.201714 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc"] Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.203231 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.206541 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.207052 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.224294 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc"] Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.248348 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-config-volume\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.253736 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-secret-volume\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.254051 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v558t\" (UniqueName: \"kubernetes.io/projected/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-kube-api-access-v558t\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.355095 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v558t\" (UniqueName: \"kubernetes.io/projected/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-kube-api-access-v558t\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.355684 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-config-volume\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.355877 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-secret-volume\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.356957 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-config-volume\") pod 
\"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.374088 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-secret-volume\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.374132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v558t\" (UniqueName: \"kubernetes.io/projected/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-kube-api-access-v558t\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.521927 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.958741 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc"] Feb 02 10:45:00 crc kubenswrapper[4782]: W0202 10:45:00.964701 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fd9b99a_c3f7_4153_b2ac_769ca0ba88aa.slice/crio-db793699e90cb720b127ccccdc17ecd9823eb6a11729e1f908b4ec3673e9addf WatchSource:0}: Error finding container db793699e90cb720b127ccccdc17ecd9823eb6a11729e1f908b4ec3673e9addf: Status 404 returned error can't find the container with id db793699e90cb720b127ccccdc17ecd9823eb6a11729e1f908b4ec3673e9addf Feb 02 10:45:01 crc kubenswrapper[4782]: I0202 10:45:01.332932 4782 generic.go:334] "Generic (PLEG): container finished" podID="6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" containerID="dca388f48a923df889d89ab8317f39bb415b2f6f2849925bcccbaf4b1c7171f9" exitCode=0 Feb 02 10:45:01 crc kubenswrapper[4782]: I0202 10:45:01.333100 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" event={"ID":"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa","Type":"ContainerDied","Data":"dca388f48a923df889d89ab8317f39bb415b2f6f2849925bcccbaf4b1c7171f9"} Feb 02 10:45:01 crc kubenswrapper[4782]: I0202 10:45:01.333227 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" event={"ID":"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa","Type":"ContainerStarted","Data":"db793699e90cb720b127ccccdc17ecd9823eb6a11729e1f908b4ec3673e9addf"} Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.658476 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.787206 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-config-volume\") pod \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.788071 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v558t\" (UniqueName: \"kubernetes.io/projected/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-kube-api-access-v558t\") pod \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.788142 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-secret-volume\") pod \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.788005 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" (UID: "6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.792751 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" (UID: "6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.794491 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-kube-api-access-v558t" (OuterVolumeSpecName: "kube-api-access-v558t") pod "6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" (UID: "6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa"). InnerVolumeSpecName "kube-api-access-v558t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.889435 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.889487 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.889502 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v558t\" (UniqueName: \"kubernetes.io/projected/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-kube-api-access-v558t\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:03 crc kubenswrapper[4782]: I0202 10:45:03.352696 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" event={"ID":"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa","Type":"ContainerDied","Data":"db793699e90cb720b127ccccdc17ecd9823eb6a11729e1f908b4ec3673e9addf"} Feb 02 10:45:03 crc kubenswrapper[4782]: I0202 10:45:03.352761 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db793699e90cb720b127ccccdc17ecd9823eb6a11729e1f908b4ec3673e9addf" Feb 02 10:45:03 crc kubenswrapper[4782]: I0202 10:45:03.352811 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:04 crc kubenswrapper[4782]: I0202 10:45:04.566303 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:45:04 crc kubenswrapper[4782]: I0202 10:45:04.621086 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jxz27"] Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.422935 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vzzf"] Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.423689 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8vzzf" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="registry-server" containerID="cri-o://05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5" gracePeriod=30 Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.442910 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxwg2"] Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.446293 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lxwg2" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="registry-server" containerID="cri-o://d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4" gracePeriod=30 Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.469383 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsb8s"] Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.469698 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" 
containerName="marketplace-operator" containerID="cri-o://7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18" gracePeriod=30 Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.474626 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tk99"] Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.474932 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8tk99" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="registry-server" containerID="cri-o://27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59" gracePeriod=30 Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.487953 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wv9v8"] Feb 02 10:45:22 crc kubenswrapper[4782]: E0202 10:45:22.488251 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" containerName="collect-profiles" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.488273 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" containerName="collect-profiles" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.488413 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" containerName="collect-profiles" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.488952 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.495138 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g65rt"] Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.495343 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g65rt" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="registry-server" containerID="cri-o://e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7" gracePeriod=30 Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.503154 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wv9v8"] Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.541508 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a044a9d0-6c97-46c4-980a-e5d9940e9f74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.541727 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a044a9d0-6c97-46c4-980a-e5d9940e9f74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.541782 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6zbz\" (UniqueName: 
\"kubernetes.io/projected/a044a9d0-6c97-46c4-980a-e5d9940e9f74-kube-api-access-j6zbz\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.657247 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a044a9d0-6c97-46c4-980a-e5d9940e9f74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.657320 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a044a9d0-6c97-46c4-980a-e5d9940e9f74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.657358 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zbz\" (UniqueName: \"kubernetes.io/projected/a044a9d0-6c97-46c4-980a-e5d9940e9f74-kube-api-access-j6zbz\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.659373 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a044a9d0-6c97-46c4-980a-e5d9940e9f74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.670489 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a044a9d0-6c97-46c4-980a-e5d9940e9f74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.676306 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6zbz\" (UniqueName: \"kubernetes.io/projected/a044a9d0-6c97-46c4-980a-e5d9940e9f74-kube-api-access-j6zbz\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.870054 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.889154 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.951358 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.951422 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.062176 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-utilities\") pod \"10039944-73fc-417b-925f-48a2985c277d\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.062285 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svwgz\" (UniqueName: \"kubernetes.io/projected/10039944-73fc-417b-925f-48a2985c277d-kube-api-access-svwgz\") pod \"10039944-73fc-417b-925f-48a2985c277d\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.062336 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-catalog-content\") pod \"10039944-73fc-417b-925f-48a2985c277d\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.063201 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-utilities" (OuterVolumeSpecName: "utilities") pod "10039944-73fc-417b-925f-48a2985c277d" (UID: "10039944-73fc-417b-925f-48a2985c277d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.081089 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10039944-73fc-417b-925f-48a2985c277d-kube-api-access-svwgz" (OuterVolumeSpecName: "kube-api-access-svwgz") pod "10039944-73fc-417b-925f-48a2985c277d" (UID: "10039944-73fc-417b-925f-48a2985c277d"). InnerVolumeSpecName "kube-api-access-svwgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.127173 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.142502 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.143232 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.144518 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.164137 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.164414 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svwgz\" (UniqueName: \"kubernetes.io/projected/10039944-73fc-417b-925f-48a2985c277d-kube-api-access-svwgz\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.165464 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10039944-73fc-417b-925f-48a2985c277d" (UID: "10039944-73fc-417b-925f-48a2985c277d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265064 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv849\" (UniqueName: \"kubernetes.io/projected/83c24a27-fdbe-468f-b4cf-780c87b598ae-kube-api-access-cv849\") pod \"83c24a27-fdbe-468f-b4cf-780c87b598ae\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265126 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-utilities\") pod \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265148 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-catalog-content\") pod \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265177 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-utilities\") pod \"9beb5599-8c2d-4493-9561-cc2781d32052\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265200 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-catalog-content\") pod \"9beb5599-8c2d-4493-9561-cc2781d32052\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265223 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf98q\" (UniqueName: \"kubernetes.io/projected/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-kube-api-access-kf98q\") pod \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265240 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-catalog-content\") pod \"d9a718cd-1b6d-483f-b995-938331c7e00e\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " 
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265268 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-utilities\") pod \"d9a718cd-1b6d-483f-b995-938331c7e00e\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265292 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-operator-metrics\") pod \"83c24a27-fdbe-468f-b4cf-780c87b598ae\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265318 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlmlg\" (UniqueName: \"kubernetes.io/projected/9beb5599-8c2d-4493-9561-cc2781d32052-kube-api-access-zlmlg\") pod \"9beb5599-8c2d-4493-9561-cc2781d32052\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265351 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mqs8\" (UniqueName: \"kubernetes.io/projected/d9a718cd-1b6d-483f-b995-938331c7e00e-kube-api-access-9mqs8\") pod \"d9a718cd-1b6d-483f-b995-938331c7e00e\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265369 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-trusted-ca\") pod \"83c24a27-fdbe-468f-b4cf-780c87b598ae\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265556 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.266067 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "83c24a27-fdbe-468f-b4cf-780c87b598ae" (UID: "83c24a27-fdbe-468f-b4cf-780c87b598ae"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.267347 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-utilities" (OuterVolumeSpecName: "utilities") pod "d9a718cd-1b6d-483f-b995-938331c7e00e" (UID: "d9a718cd-1b6d-483f-b995-938331c7e00e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.270049 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-utilities" (OuterVolumeSpecName: "utilities") pod "cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" (UID: "cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.271620 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "83c24a27-fdbe-468f-b4cf-780c87b598ae" (UID: "83c24a27-fdbe-468f-b4cf-780c87b598ae"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.271628 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-utilities" (OuterVolumeSpecName: "utilities") pod "9beb5599-8c2d-4493-9561-cc2781d32052" (UID: "9beb5599-8c2d-4493-9561-cc2781d32052"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.274377 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9beb5599-8c2d-4493-9561-cc2781d32052-kube-api-access-zlmlg" (OuterVolumeSpecName: "kube-api-access-zlmlg") pod "9beb5599-8c2d-4493-9561-cc2781d32052" (UID: "9beb5599-8c2d-4493-9561-cc2781d32052"). InnerVolumeSpecName "kube-api-access-zlmlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.276540 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a718cd-1b6d-483f-b995-938331c7e00e-kube-api-access-9mqs8" (OuterVolumeSpecName: "kube-api-access-9mqs8") pod "d9a718cd-1b6d-483f-b995-938331c7e00e" (UID: "d9a718cd-1b6d-483f-b995-938331c7e00e"). InnerVolumeSpecName "kube-api-access-9mqs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.290599 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-kube-api-access-kf98q" (OuterVolumeSpecName: "kube-api-access-kf98q") pod "cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" (UID: "cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde"). InnerVolumeSpecName "kube-api-access-kf98q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.297386 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c24a27-fdbe-468f-b4cf-780c87b598ae-kube-api-access-cv849" (OuterVolumeSpecName: "kube-api-access-cv849") pod "83c24a27-fdbe-468f-b4cf-780c87b598ae" (UID: "83c24a27-fdbe-468f-b4cf-780c87b598ae"). InnerVolumeSpecName "kube-api-access-cv849". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.312340 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9beb5599-8c2d-4493-9561-cc2781d32052" (UID: "9beb5599-8c2d-4493-9561-cc2781d32052"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.333487 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" (UID: "cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366376 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mqs8\" (UniqueName: \"kubernetes.io/projected/d9a718cd-1b6d-483f-b995-938331c7e00e-kube-api-access-9mqs8\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366405 4782 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366414 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv849\" (UniqueName: \"kubernetes.io/projected/83c24a27-fdbe-468f-b4cf-780c87b598ae-kube-api-access-cv849\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366423 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366431 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366440 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366448 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366459 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf98q\" (UniqueName: \"kubernetes.io/projected/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-kube-api-access-kf98q\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366467 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366475 4782 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366484 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlmlg\" (UniqueName: \"kubernetes.io/projected/9beb5599-8c2d-4493-9561-cc2781d32052-kube-api-access-zlmlg\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc 
kubenswrapper[4782]: I0202 10:45:23.384400 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9a718cd-1b6d-483f-b995-938331c7e00e" (UID: "d9a718cd-1b6d-483f-b995-938331c7e00e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.438445 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wv9v8"] Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.467856 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.476084 4782 generic.go:334] "Generic (PLEG): container finished" podID="9beb5599-8c2d-4493-9561-cc2781d32052" containerID="27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59" exitCode=0 Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.476159 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerDied","Data":"27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59"} Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.476190 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerDied","Data":"568ce8fc0d55d9c475927a100a13079ea3c32843e1f085a43192f2b40f052173"} Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.476207 4782 scope.go:117] "RemoveContainer" containerID="27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.476318 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.490513 4782 generic.go:334] "Generic (PLEG): container finished" podID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerID="7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18" exitCode=0 Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.490618 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" event={"ID":"83c24a27-fdbe-468f-b4cf-780c87b598ae","Type":"ContainerDied","Data":"7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18"} Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.490668 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" event={"ID":"83c24a27-fdbe-468f-b4cf-780c87b598ae","Type":"ContainerDied","Data":"01e32062d069a57210dfb3c4675630b56cd608a941ddd39a5505e8107646b05b"} Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.490759 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.504767 4782 scope.go:117] "RemoveContainer" containerID="47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.506242 4782 generic.go:334] "Generic (PLEG): container finished" podID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerID="05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5" exitCode=0 Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.506314 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.506322 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerDied","Data":"05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5"} Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.506361 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerDied","Data":"2157f695d84a6bf7a7c1d517b9438fd49370964e625a05cdc501c630222fe141"} Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.511532 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" event={"ID":"a044a9d0-6c97-46c4-980a-e5d9940e9f74","Type":"ContainerStarted","Data":"e304c06aa518c2e00ee4a2c8b84b1d6d99652e36ab28422e86846e294cf20f53"} Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.530699 4782 generic.go:334] "Generic (PLEG): container finished" podID="10039944-73fc-417b-925f-48a2985c277d" containerID="d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4" exitCode=0 Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.530779 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerDied","Data":"d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4"} Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.530829 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerDied","Data":"1277c93f9cf96ac2b46fd7682341d99a2d1a3ea302f1d88526054e535369a8b5"} Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.530938 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.536718 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tk99"] Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.544297 4782 generic.go:334] "Generic (PLEG): container finished" podID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerID="e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7" exitCode=0 Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.544338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerDied","Data":"e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7"} Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.544362 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerDied","Data":"4bfe0b83f6a780843d1b621b1e211e51cf466967683c4406c5ee8fe51515e3e6"} Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.544422 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.546228 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tk99"] Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.546238 4782 scope.go:117] "RemoveContainer" containerID="26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.549609 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsb8s"] Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.561554 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsb8s"] Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.582059 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxwg2"] Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.586289 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lxwg2"] Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.588797 4782 scope.go:117] "RemoveContainer" containerID="27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.590990 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59\": container with ID starting with 27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59 not found: ID does not exist" containerID="27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.591054 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59"} err="failed to get container status \"27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59\": rpc error: code = NotFound desc = could not find container \"27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59\": container with ID starting with 
27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59 not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.591089 4782 scope.go:117] "RemoveContainer" containerID="47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.593542 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5\": container with ID starting with 47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5 not found: ID does not exist" containerID="47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.593586 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5"} err="failed to get container status \"47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5\": rpc error: code = NotFound desc = could not find container \"47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5\": container with ID starting with 47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5 not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.593619 4782 scope.go:117] "RemoveContainer" containerID="26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.594684 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1\": container with ID starting with 26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1 not found: ID does not exist" containerID="26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.594707 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1"} err="failed to get container status \"26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1\": rpc error: code = NotFound desc = could not find container \"26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1\": container with ID starting with 26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1 not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.594723 4782 scope.go:117] "RemoveContainer" containerID="7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.595461 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vzzf"] Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.607195 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8vzzf"] Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.643365 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g65rt"] Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.643798 4782 scope.go:117] "RemoveContainer" containerID="7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.649258 4782 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-operators-g65rt"] Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.663170 4782 scope.go:117] "RemoveContainer" containerID="7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.664007 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18\": container with ID starting with 7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18 not found: ID does not exist" containerID="7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.664043 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18"} err="failed to get container status \"7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18\": rpc error: code = NotFound desc = could not find container \"7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18\": container with ID starting with 7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18 not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.664068 4782 scope.go:117] "RemoveContainer" containerID="7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.664337 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0\": container with ID starting with 7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0 not found: ID does not exist" containerID="7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.664375 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0"} err="failed to get container status \"7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0\": rpc error: code = NotFound desc = could not find container \"7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0\": container with ID starting with 7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0 not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.664395 4782 scope.go:117] "RemoveContainer" containerID="05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.677662 4782 scope.go:117] "RemoveContainer" containerID="00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.704161 4782 scope.go:117] "RemoveContainer" containerID="a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.716331 4782 scope.go:117] "RemoveContainer" containerID="05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.718017 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5\": container with ID starting with 
05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5 not found: ID does not exist" containerID="05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.718054 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5"} err="failed to get container status \"05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5\": rpc error: code = NotFound desc = could not find container \"05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5\": container with ID starting with 05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5 not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.718076 4782 scope.go:117] "RemoveContainer" containerID="00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.718386 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd\": container with ID starting with 00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd not found: ID does not exist" containerID="00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.718411 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd"} err="failed to get container status \"00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd\": rpc error: code = NotFound desc = could not find container \"00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd\": container with ID starting with 00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.718424 4782 scope.go:117] "RemoveContainer" containerID="a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.718656 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87\": container with ID starting with a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87 not found: ID does not exist" containerID="a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.718676 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87"} err="failed to get container status \"a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87\": rpc error: code = NotFound desc = could not find container \"a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87\": container with ID starting with a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87 not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.718687 4782 scope.go:117] "RemoveContainer" containerID="d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.731946 4782 scope.go:117] "RemoveContainer" 
containerID="c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.745766 4782 scope.go:117] "RemoveContainer" containerID="9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.766579 4782 scope.go:117] "RemoveContainer" containerID="d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.767124 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4\": container with ID starting with d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4 not found: ID does not exist" containerID="d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.767164 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4"} err="failed to get container status \"d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4\": rpc error: code = NotFound desc = could not find container \"d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4\": container with ID starting with d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4 not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.767190 4782 scope.go:117] "RemoveContainer" containerID="c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.767464 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7\": container with ID starting with c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7 not found: ID does not exist" containerID="c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.767494 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7"} err="failed to get container status \"c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7\": rpc error: code = NotFound desc = could not find container \"c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7\": container with ID starting with c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7 not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.767514 4782 scope.go:117] "RemoveContainer" containerID="9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.767814 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e\": container with ID starting with 9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e not found: ID does not exist" containerID="9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.767838 4782 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e"} err="failed to get container status \"9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e\": rpc error: code = NotFound desc = could not find container \"9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e\": container with ID starting with 9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.767852 4782 scope.go:117] "RemoveContainer" containerID="e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.792199 4782 scope.go:117] "RemoveContainer" containerID="46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.806433 4782 scope.go:117] "RemoveContainer" containerID="79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.824037 4782 scope.go:117] "RemoveContainer" containerID="e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.825706 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7\": container with ID starting with e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7 not found: ID does not exist" containerID="e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.826365 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7"} err="failed to get container status \"e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7\": rpc error: code = NotFound desc = could not find container \"e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7\": container with ID starting with e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7 not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.826448 4782 scope.go:117] "RemoveContainer" containerID="46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.827950 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469\": container with ID starting with 46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469 not found: ID does not exist" containerID="46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.827983 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469"} err="failed to get container status \"46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469\": rpc error: code = NotFound desc = could not find container \"46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469\": container with ID starting with 46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469 not found: ID does not exist" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.828031 4782 
scope.go:117] "RemoveContainer" containerID="79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a" Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.828250 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a\": container with ID starting with 79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a not found: ID does not exist" containerID="79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a" Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.828279 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a"} err="failed to get container status \"79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a\": rpc error: code = NotFound desc = could not find container \"79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a\": container with ID starting with 79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a not found: ID does not exist" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.239399 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vnt75"] Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240109 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="extract-utilities" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.240191 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="extract-utilities" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240270 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.240328 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240397 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="extract-utilities" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.240460 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="extract-utilities" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240518 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="extract-utilities" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.240617 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="extract-utilities" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240717 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.240775 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240835 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="registry-server" Feb 02 
10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.240913 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240982 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.241394 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.241455 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="extract-utilities" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.241522 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="extract-utilities" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.241583 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.241664 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.241737 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.241902 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.241985 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242041 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.242098 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242159 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.242222 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242276 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.242339 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242398 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242547 4782 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242615 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242705 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242766 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242825 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242905 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.244026 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.249123 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vnt75"] Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.249177 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.386292 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-catalog-content\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.386380 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz9l7\" (UniqueName: \"kubernetes.io/projected/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-kube-api-access-mz9l7\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.386422 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-utilities\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.487995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-catalog-content\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.488661 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-catalog-content\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.488896 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz9l7\" (UniqueName: \"kubernetes.io/projected/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-kube-api-access-mz9l7\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.489060 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-utilities\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.489425 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-utilities\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.507955 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz9l7\" (UniqueName: \"kubernetes.io/projected/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-kube-api-access-mz9l7\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.550266 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" event={"ID":"a044a9d0-6c97-46c4-980a-e5d9940e9f74","Type":"ContainerStarted","Data":"ad9b3800da8e1c1a2fe4f015cda465403fab8ed21fbb019ddbe39b3cb0e75736"} Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.550526 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.555697 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.561141 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.569930 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" podStartSLOduration=2.569912184 podStartE2EDuration="2.569912184s" podCreationTimestamp="2026-02-02 10:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:45:24.569034208 +0000 UTC m=+404.453226934" watchObservedRunningTime="2026-02-02 10:45:24.569912184 +0000 UTC m=+404.454104920" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.812988 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vnt75"] Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.830285 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10039944-73fc-417b-925f-48a2985c277d" path="/var/lib/kubelet/pods/10039944-73fc-417b-925f-48a2985c277d/volumes" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.830982 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" path="/var/lib/kubelet/pods/83c24a27-fdbe-468f-b4cf-780c87b598ae/volumes" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.831450 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" path="/var/lib/kubelet/pods/9beb5599-8c2d-4493-9561-cc2781d32052/volumes" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.832900 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" path="/var/lib/kubelet/pods/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde/volumes" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.833548 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" path="/var/lib/kubelet/pods/d9a718cd-1b6d-483f-b995-938331c7e00e/volumes" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.557219 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g864k"] Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.566581 4782 generic.go:334] "Generic (PLEG): container finished" podID="c80d3e09-03c8-40f0-a4dd-474da2b5d31d" containerID="61f516602f3731fe476be806c116fc90e024d8989ff3f7d5fed5a62cf9542b16" exitCode=0 Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.573383 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnt75" event={"ID":"c80d3e09-03c8-40f0-a4dd-474da2b5d31d","Type":"ContainerDied","Data":"61f516602f3731fe476be806c116fc90e024d8989ff3f7d5fed5a62cf9542b16"} Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.573426 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g864k"] Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.573448 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnt75" event={"ID":"c80d3e09-03c8-40f0-a4dd-474da2b5d31d","Type":"ContainerStarted","Data":"ff5089fa53f99c39368573b21860998154a9938a6ddfbd8ae9c816812abef8f9"} Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.573556 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.578197 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.705692 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jssss\" (UniqueName: \"kubernetes.io/projected/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-kube-api-access-jssss\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.705746 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-catalog-content\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.705865 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-utilities\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.806426 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-utilities\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.806498 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jssss\" (UniqueName: \"kubernetes.io/projected/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-kube-api-access-jssss\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.806529 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-catalog-content\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.807024 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-utilities\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.807141 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-catalog-content\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.849065 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jssss\" (UniqueName: \"kubernetes.io/projected/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-kube-api-access-jssss\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.900579 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.364862 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g864k"] Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.438840 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h2hxh"] Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.440715 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.442974 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.476001 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h2hxh"] Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.577301 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g864k" event={"ID":"9e046c0e-cea4-45b0-8952-1fc5edb01ff5","Type":"ContainerStarted","Data":"afeaf889e775c84c90e62264625391eafe4337fd0e7df118964f365a29a6f1bc"} Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.580684 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnt75" event={"ID":"c80d3e09-03c8-40f0-a4dd-474da2b5d31d","Type":"ContainerStarted","Data":"fc52609a3149da16c5a38619b760aa6093547c958583a21e9f45474fe3f71d2b"} Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.616179 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe57942f-8b6f-4400-8ed5-6fb054a514bf-catalog-content\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.616269 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe57942f-8b6f-4400-8ed5-6fb054a514bf-utilities\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.616322 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6h4g\" (UniqueName: \"kubernetes.io/projected/fe57942f-8b6f-4400-8ed5-6fb054a514bf-kube-api-access-z6h4g\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.717229 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe57942f-8b6f-4400-8ed5-6fb054a514bf-catalog-content\") pod \"redhat-operators-h2hxh\" 
(UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.717543 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe57942f-8b6f-4400-8ed5-6fb054a514bf-utilities\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.717674 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6h4g\" (UniqueName: \"kubernetes.io/projected/fe57942f-8b6f-4400-8ed5-6fb054a514bf-kube-api-access-z6h4g\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.717716 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe57942f-8b6f-4400-8ed5-6fb054a514bf-catalog-content\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.718614 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe57942f-8b6f-4400-8ed5-6fb054a514bf-utilities\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.736877 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6h4g\" (UniqueName: \"kubernetes.io/projected/fe57942f-8b6f-4400-8ed5-6fb054a514bf-kube-api-access-z6h4g\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.770038 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.213647 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h2hxh"] Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.586198 4782 generic.go:334] "Generic (PLEG): container finished" podID="fe57942f-8b6f-4400-8ed5-6fb054a514bf" containerID="f9e4261b90f87b00dd611e23eb7f14a088abac96b23ddc99dd09cf3667f26c8a" exitCode=0 Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.586303 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2hxh" event={"ID":"fe57942f-8b6f-4400-8ed5-6fb054a514bf","Type":"ContainerDied","Data":"f9e4261b90f87b00dd611e23eb7f14a088abac96b23ddc99dd09cf3667f26c8a"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.586623 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2hxh" event={"ID":"fe57942f-8b6f-4400-8ed5-6fb054a514bf","Type":"ContainerStarted","Data":"2787194a0614bc3148c3f8072417b5f370332a751d21f01a014fe0bfe3996685"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.588333 4782 generic.go:334] "Generic (PLEG): container finished" podID="c80d3e09-03c8-40f0-a4dd-474da2b5d31d" containerID="fc52609a3149da16c5a38619b760aa6093547c958583a21e9f45474fe3f71d2b" exitCode=0 Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.588384 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnt75" event={"ID":"c80d3e09-03c8-40f0-a4dd-474da2b5d31d","Type":"ContainerDied","Data":"fc52609a3149da16c5a38619b760aa6093547c958583a21e9f45474fe3f71d2b"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.589827 4782 generic.go:334] "Generic (PLEG): container finished" podID="9e046c0e-cea4-45b0-8952-1fc5edb01ff5" containerID="8611d0e0a18f4a738d6f6d216f194b7b2b1c4072e2a2b0191656b491c454306f" exitCode=0 Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.589851 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g864k" event={"ID":"9e046c0e-cea4-45b0-8952-1fc5edb01ff5","Type":"ContainerDied","Data":"8611d0e0a18f4a738d6f6d216f194b7b2b1c4072e2a2b0191656b491c454306f"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.845205 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qsk6j"] Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.848738 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.851208 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.852609 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qsk6j"] Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.036062 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thrjg\" (UniqueName: \"kubernetes.io/projected/a435172a-875e-47e1-8c17-fad9fe2a0baf-kube-api-access-thrjg\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.036344 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a435172a-875e-47e1-8c17-fad9fe2a0baf-catalog-content\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.036468 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a435172a-875e-47e1-8c17-fad9fe2a0baf-utilities\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.138089 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a435172a-875e-47e1-8c17-fad9fe2a0baf-catalog-content\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.138749 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a435172a-875e-47e1-8c17-fad9fe2a0baf-utilities\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.138792 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thrjg\" (UniqueName: \"kubernetes.io/projected/a435172a-875e-47e1-8c17-fad9fe2a0baf-kube-api-access-thrjg\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.138801 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a435172a-875e-47e1-8c17-fad9fe2a0baf-catalog-content\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.140278 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a435172a-875e-47e1-8c17-fad9fe2a0baf-utilities\") pod \"community-operators-qsk6j\" (UID: 
\"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.167472 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thrjg\" (UniqueName: \"kubernetes.io/projected/a435172a-875e-47e1-8c17-fad9fe2a0baf-kube-api-access-thrjg\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.176145 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.597315 4782 generic.go:334] "Generic (PLEG): container finished" podID="9e046c0e-cea4-45b0-8952-1fc5edb01ff5" containerID="0f87ef34a4c142ee9578627591df9988cc347d3787bbd82c2fe49f783a811331" exitCode=0 Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.597408 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g864k" event={"ID":"9e046c0e-cea4-45b0-8952-1fc5edb01ff5","Type":"ContainerDied","Data":"0f87ef34a4c142ee9578627591df9988cc347d3787bbd82c2fe49f783a811331"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.611087 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2hxh" event={"ID":"fe57942f-8b6f-4400-8ed5-6fb054a514bf","Type":"ContainerStarted","Data":"c25a1063092a0a0420687ef450e7dc52ad1df8a8413a4d89dee1f84103f7cabf"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.619368 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnt75" event={"ID":"c80d3e09-03c8-40f0-a4dd-474da2b5d31d","Type":"ContainerStarted","Data":"efbad1331c2e169e99b07c3e8d7e0fbdb4d636fc3efabac4cbf098cbb5737308"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.659756 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qsk6j"] Feb 02 10:45:28 crc kubenswrapper[4782]: W0202 10:45:28.662494 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda435172a_875e_47e1_8c17_fad9fe2a0baf.slice/crio-05aaa695eb00f561e816b44a427614ebb18292a239e8a7416e3550beaecf0a47 WatchSource:0}: Error finding container 05aaa695eb00f561e816b44a427614ebb18292a239e8a7416e3550beaecf0a47: Status 404 returned error can't find the container with id 05aaa695eb00f561e816b44a427614ebb18292a239e8a7416e3550beaecf0a47 Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.669041 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vnt75" podStartSLOduration=2.099065735 podStartE2EDuration="4.669025696s" podCreationTimestamp="2026-02-02 10:45:24 +0000 UTC" firstStartedPulling="2026-02-02 10:45:25.569321495 +0000 UTC m=+405.453514211" lastFinishedPulling="2026-02-02 10:45:28.139281456 +0000 UTC m=+408.023474172" observedRunningTime="2026-02-02 10:45:28.667327376 +0000 UTC m=+408.551520092" watchObservedRunningTime="2026-02-02 10:45:28.669025696 +0000 UTC m=+408.553218402" Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.625780 4782 generic.go:334] "Generic (PLEG): container finished" podID="fe57942f-8b6f-4400-8ed5-6fb054a514bf" containerID="c25a1063092a0a0420687ef450e7dc52ad1df8a8413a4d89dee1f84103f7cabf" exitCode=0 Feb 02 10:45:29 crc kubenswrapper[4782]: 
I0202 10:45:29.625874 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2hxh" event={"ID":"fe57942f-8b6f-4400-8ed5-6fb054a514bf","Type":"ContainerDied","Data":"c25a1063092a0a0420687ef450e7dc52ad1df8a8413a4d89dee1f84103f7cabf"} Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.627523 4782 generic.go:334] "Generic (PLEG): container finished" podID="a435172a-875e-47e1-8c17-fad9fe2a0baf" containerID="84b8ee0c8c9e2b8ec98cf705db5b59c5234a4d37ef17d79b9bc5c6142f49f253" exitCode=0 Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.627598 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qsk6j" event={"ID":"a435172a-875e-47e1-8c17-fad9fe2a0baf","Type":"ContainerDied","Data":"84b8ee0c8c9e2b8ec98cf705db5b59c5234a4d37ef17d79b9bc5c6142f49f253"} Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.627620 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qsk6j" event={"ID":"a435172a-875e-47e1-8c17-fad9fe2a0baf","Type":"ContainerStarted","Data":"05aaa695eb00f561e816b44a427614ebb18292a239e8a7416e3550beaecf0a47"} Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.631782 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g864k" event={"ID":"9e046c0e-cea4-45b0-8952-1fc5edb01ff5","Type":"ContainerStarted","Data":"c63e40b9e6d47fcb72990ea699c85cc90b96c9ad32d91f1259e6471f557a5f90"} Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.676800 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g864k" podStartSLOduration=2.8348502399999997 podStartE2EDuration="4.676776291s" podCreationTimestamp="2026-02-02 10:45:25 +0000 UTC" firstStartedPulling="2026-02-02 10:45:27.590973484 +0000 UTC m=+407.475166200" lastFinishedPulling="2026-02-02 10:45:29.432899535 +0000 UTC m=+409.317092251" observedRunningTime="2026-02-02 10:45:29.672750123 +0000 UTC m=+409.556942849" watchObservedRunningTime="2026-02-02 10:45:29.676776291 +0000 UTC m=+409.560969007" Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.696343 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" podUID="4877e80d-a6fe-4503-a64c-398815efa1e0" containerName="registry" containerID="cri-o://9a008fac97312f4f6086015007e8d88cd2830bbd1da80845daf3cde642284642" gracePeriod=30 Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.638023 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2hxh" event={"ID":"fe57942f-8b6f-4400-8ed5-6fb054a514bf","Type":"ContainerStarted","Data":"d97812050a5a3a18f199c753639f735caa0c1f383b60507a26e47f8e24a93519"} Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.640061 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qsk6j" event={"ID":"a435172a-875e-47e1-8c17-fad9fe2a0baf","Type":"ContainerStarted","Data":"b669957a2b5cac24387adaca579bdd46f439a001c4236c44963844410702fef3"} Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.642493 4782 generic.go:334] "Generic (PLEG): container finished" podID="4877e80d-a6fe-4503-a64c-398815efa1e0" containerID="9a008fac97312f4f6086015007e8d88cd2830bbd1da80845daf3cde642284642" exitCode=0 Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.643074 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" event={"ID":"4877e80d-a6fe-4503-a64c-398815efa1e0","Type":"ContainerDied","Data":"9a008fac97312f4f6086015007e8d88cd2830bbd1da80845daf3cde642284642"} Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.669837 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h2hxh" podStartSLOduration=2.156567142 podStartE2EDuration="4.669820375s" podCreationTimestamp="2026-02-02 10:45:26 +0000 UTC" firstStartedPulling="2026-02-02 10:45:27.587800201 +0000 UTC m=+407.471992917" lastFinishedPulling="2026-02-02 10:45:30.101053434 +0000 UTC m=+409.985246150" observedRunningTime="2026-02-02 10:45:30.666714534 +0000 UTC m=+410.550907250" watchObservedRunningTime="2026-02-02 10:45:30.669820375 +0000 UTC m=+410.554013091" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.704164 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.872937 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-tls\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873332 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5khp\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-kube-api-access-f5khp\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873376 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4877e80d-a6fe-4503-a64c-398815efa1e0-ca-trust-extracted\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873419 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4877e80d-a6fe-4503-a64c-398815efa1e0-installation-pull-secrets\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873441 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-trusted-ca\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873616 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873639 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-bound-sa-token\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: 
\"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873685 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-certificates\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.874205 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.874322 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.879318 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4877e80d-a6fe-4503-a64c-398815efa1e0-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.879925 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-kube-api-access-f5khp" (OuterVolumeSpecName: "kube-api-access-f5khp") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "kube-api-access-f5khp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.880297 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.886011 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.904387 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4877e80d-a6fe-4503-a64c-398815efa1e0-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.910077 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974804 4782 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974855 4782 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974870 4782 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974882 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5khp\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-kube-api-access-f5khp\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974893 4782 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4877e80d-a6fe-4503-a64c-398815efa1e0-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974903 4782 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4877e80d-a6fe-4503-a64c-398815efa1e0-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974913 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.649289 4782 generic.go:334] "Generic (PLEG): container finished" podID="a435172a-875e-47e1-8c17-fad9fe2a0baf" containerID="b669957a2b5cac24387adaca579bdd46f439a001c4236c44963844410702fef3" exitCode=0 Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.649391 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qsk6j" event={"ID":"a435172a-875e-47e1-8c17-fad9fe2a0baf","Type":"ContainerDied","Data":"b669957a2b5cac24387adaca579bdd46f439a001c4236c44963844410702fef3"} Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.651938 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.652025 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" event={"ID":"4877e80d-a6fe-4503-a64c-398815efa1e0","Type":"ContainerDied","Data":"a55c72e5f15ff42bfcfbbd5f83cbfe22e092ae45221bb6158bb15a9d235221ed"} Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.652077 4782 scope.go:117] "RemoveContainer" containerID="9a008fac97312f4f6086015007e8d88cd2830bbd1da80845daf3cde642284642" Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.689812 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jxz27"] Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.695997 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jxz27"] Feb 02 10:45:32 crc kubenswrapper[4782]: I0202 10:45:32.658299 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qsk6j" event={"ID":"a435172a-875e-47e1-8c17-fad9fe2a0baf","Type":"ContainerStarted","Data":"9ab967490199a05243dc87fa6db99670429507a31ad469d631692722f93b54e6"} Feb 02 10:45:32 crc kubenswrapper[4782]: I0202 10:45:32.676405 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qsk6j" podStartSLOduration=3.179938633 podStartE2EDuration="5.676388564s" podCreationTimestamp="2026-02-02 10:45:27 +0000 UTC" firstStartedPulling="2026-02-02 10:45:29.630148707 +0000 UTC m=+409.514341423" lastFinishedPulling="2026-02-02 10:45:32.126598638 +0000 UTC m=+412.010791354" observedRunningTime="2026-02-02 10:45:32.674077286 +0000 UTC m=+412.558270012" watchObservedRunningTime="2026-02-02 10:45:32.676388564 +0000 UTC m=+412.560581280" Feb 02 10:45:32 crc kubenswrapper[4782]: I0202 10:45:32.827795 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4877e80d-a6fe-4503-a64c-398815efa1e0" path="/var/lib/kubelet/pods/4877e80d-a6fe-4503-a64c-398815efa1e0/volumes" Feb 02 10:45:34 crc kubenswrapper[4782]: I0202 10:45:34.561329 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:34 crc kubenswrapper[4782]: I0202 10:45:34.561692 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:34 crc kubenswrapper[4782]: I0202 10:45:34.606004 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:34 crc kubenswrapper[4782]: I0202 10:45:34.721498 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:35 crc kubenswrapper[4782]: I0202 10:45:35.901355 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:35 crc kubenswrapper[4782]: I0202 10:45:35.902283 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:35 crc kubenswrapper[4782]: I0202 10:45:35.952606 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:36 crc kubenswrapper[4782]: I0202 
10:45:36.716127 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:36 crc kubenswrapper[4782]: I0202 10:45:36.771080 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:36 crc kubenswrapper[4782]: I0202 10:45:36.771150 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:36 crc kubenswrapper[4782]: I0202 10:45:36.808236 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:37 crc kubenswrapper[4782]: I0202 10:45:37.728656 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:38 crc kubenswrapper[4782]: I0202 10:45:38.177352 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:38 crc kubenswrapper[4782]: I0202 10:45:38.177404 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:38 crc kubenswrapper[4782]: I0202 10:45:38.216523 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:38 crc kubenswrapper[4782]: I0202 10:45:38.728131 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:52 crc kubenswrapper[4782]: I0202 10:45:52.951021 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:45:52 crc kubenswrapper[4782]: I0202 10:45:52.951601 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:45:52 crc kubenswrapper[4782]: I0202 10:45:52.951665 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:45:52 crc kubenswrapper[4782]: I0202 10:45:52.952234 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68181eab99dccd23b4af9f91ccc576ac3321f9b931dcb6edbebeb0694cfecf25"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:45:52 crc kubenswrapper[4782]: I0202 10:45:52.952293 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://68181eab99dccd23b4af9f91ccc576ac3321f9b931dcb6edbebeb0694cfecf25" gracePeriod=600 Feb 02 10:45:53 crc kubenswrapper[4782]: I0202 10:45:53.758788 4782 generic.go:334] "Generic (PLEG): container finished" 
podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="68181eab99dccd23b4af9f91ccc576ac3321f9b931dcb6edbebeb0694cfecf25" exitCode=0 Feb 02 10:45:53 crc kubenswrapper[4782]: I0202 10:45:53.758882 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"68181eab99dccd23b4af9f91ccc576ac3321f9b931dcb6edbebeb0694cfecf25"} Feb 02 10:45:53 crc kubenswrapper[4782]: I0202 10:45:53.759297 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"f6754467e1adce39d1aaf093b6b8963c3db110696bed13c171a5267d1c658dfc"} Feb 02 10:45:53 crc kubenswrapper[4782]: I0202 10:45:53.759325 4782 scope.go:117] "RemoveContainer" containerID="362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810" Feb 02 10:48:22 crc kubenswrapper[4782]: I0202 10:48:22.951524 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:48:22 crc kubenswrapper[4782]: I0202 10:48:22.952080 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:48:52 crc kubenswrapper[4782]: I0202 10:48:52.951199 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:48:52 crc kubenswrapper[4782]: I0202 10:48:52.951880 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:49:22 crc kubenswrapper[4782]: I0202 10:49:22.951364 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:49:22 crc kubenswrapper[4782]: I0202 10:49:22.952837 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:49:22 crc kubenswrapper[4782]: I0202 10:49:22.952940 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:49:22 crc kubenswrapper[4782]: I0202 10:49:22.953603 4782 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6754467e1adce39d1aaf093b6b8963c3db110696bed13c171a5267d1c658dfc"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:49:22 crc kubenswrapper[4782]: I0202 10:49:22.953685 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://f6754467e1adce39d1aaf093b6b8963c3db110696bed13c171a5267d1c658dfc" gracePeriod=600 Feb 02 10:49:23 crc kubenswrapper[4782]: I0202 10:49:23.903098 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="f6754467e1adce39d1aaf093b6b8963c3db110696bed13c171a5267d1c658dfc" exitCode=0 Feb 02 10:49:23 crc kubenswrapper[4782]: I0202 10:49:23.903202 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"f6754467e1adce39d1aaf093b6b8963c3db110696bed13c171a5267d1c658dfc"} Feb 02 10:49:23 crc kubenswrapper[4782]: I0202 10:49:23.903598 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"2dc043efe5736739c3acc8fe9716ce3a52d3c218a415682bfde40984fdbbbf0c"} Feb 02 10:49:23 crc kubenswrapper[4782]: I0202 10:49:23.903623 4782 scope.go:117] "RemoveContainer" containerID="68181eab99dccd23b4af9f91ccc576ac3321f9b931dcb6edbebeb0694cfecf25" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.236388 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk"] Feb 02 10:50:14 crc kubenswrapper[4782]: E0202 10:50:14.237039 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4877e80d-a6fe-4503-a64c-398815efa1e0" containerName="registry" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.237052 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4877e80d-a6fe-4503-a64c-398815efa1e0" containerName="registry" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.237136 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4877e80d-a6fe-4503-a64c-398815efa1e0" containerName="registry" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.237480 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.243769 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-vcnls"] Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.245490 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vcnls" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.258574 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.258940 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.259105 4782 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-rrcfn" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.268276 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vcnls"] Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.271403 4782 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-69dk7" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.291937 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk"] Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.292461 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hn2p\" (UniqueName: \"kubernetes.io/projected/9890a2a1-2fba-4553-87eb-0b70bdc93730-kube-api-access-4hn2p\") pod \"cert-manager-858654f9db-vcnls\" (UID: \"9890a2a1-2fba-4553-87eb-0b70bdc93730\") " pod="cert-manager/cert-manager-858654f9db-vcnls" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.292543 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znrzr\" (UniqueName: \"kubernetes.io/projected/49141326-2954-4715-aaa9-86641ac21fa9-kube-api-access-znrzr\") pod \"cert-manager-cainjector-cf98fcc89-jdfqk\" (UID: \"49141326-2954-4715-aaa9-86641ac21fa9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.301975 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-9h9rr"] Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.302874 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.306452 4782 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-pbkcb" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.314980 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-9h9rr"] Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.393507 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq9ws\" (UniqueName: \"kubernetes.io/projected/d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a-kube-api-access-qq9ws\") pod \"cert-manager-webhook-687f57d79b-9h9rr\" (UID: \"d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.393567 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hn2p\" (UniqueName: \"kubernetes.io/projected/9890a2a1-2fba-4553-87eb-0b70bdc93730-kube-api-access-4hn2p\") pod \"cert-manager-858654f9db-vcnls\" (UID: \"9890a2a1-2fba-4553-87eb-0b70bdc93730\") " pod="cert-manager/cert-manager-858654f9db-vcnls" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.393608 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znrzr\" (UniqueName: \"kubernetes.io/projected/49141326-2954-4715-aaa9-86641ac21fa9-kube-api-access-znrzr\") pod \"cert-manager-cainjector-cf98fcc89-jdfqk\" (UID: \"49141326-2954-4715-aaa9-86641ac21fa9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.415109 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znrzr\" (UniqueName: \"kubernetes.io/projected/49141326-2954-4715-aaa9-86641ac21fa9-kube-api-access-znrzr\") pod \"cert-manager-cainjector-cf98fcc89-jdfqk\" (UID: \"49141326-2954-4715-aaa9-86641ac21fa9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.415185 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hn2p\" (UniqueName: \"kubernetes.io/projected/9890a2a1-2fba-4553-87eb-0b70bdc93730-kube-api-access-4hn2p\") pod \"cert-manager-858654f9db-vcnls\" (UID: \"9890a2a1-2fba-4553-87eb-0b70bdc93730\") " pod="cert-manager/cert-manager-858654f9db-vcnls" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.495085 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq9ws\" (UniqueName: \"kubernetes.io/projected/d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a-kube-api-access-qq9ws\") pod \"cert-manager-webhook-687f57d79b-9h9rr\" (UID: \"d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.517200 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq9ws\" (UniqueName: \"kubernetes.io/projected/d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a-kube-api-access-qq9ws\") pod \"cert-manager-webhook-687f57d79b-9h9rr\" (UID: \"d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.563294 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.590211 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vcnls" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.619143 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.941122 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-9h9rr"] Feb 02 10:50:14 crc kubenswrapper[4782]: W0202 10:50:14.948210 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ae0a8e_231d_4be5_aa1e_ac35dfbabe4a.slice/crio-343e38fd9583ccda08f3cd35e81f105cdf5d7f9d72f6fb69bf292fdb84dc3486 WatchSource:0}: Error finding container 343e38fd9583ccda08f3cd35e81f105cdf5d7f9d72f6fb69bf292fdb84dc3486: Status 404 returned error can't find the container with id 343e38fd9583ccda08f3cd35e81f105cdf5d7f9d72f6fb69bf292fdb84dc3486 Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.950881 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:50:15 crc kubenswrapper[4782]: I0202 10:50:15.037457 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk"] Feb 02 10:50:15 crc kubenswrapper[4782]: I0202 10:50:15.044026 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vcnls"] Feb 02 10:50:15 crc kubenswrapper[4782]: W0202 10:50:15.050128 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9890a2a1_2fba_4553_87eb_0b70bdc93730.slice/crio-048aee2dddc375f67743a24499dbd9e88dcd89da987d0715d6244e523b6aad7a WatchSource:0}: Error finding container 048aee2dddc375f67743a24499dbd9e88dcd89da987d0715d6244e523b6aad7a: Status 404 returned error can't find the container with id 048aee2dddc375f67743a24499dbd9e88dcd89da987d0715d6244e523b6aad7a Feb 02 10:50:15 crc kubenswrapper[4782]: I0202 10:50:15.187820 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" event={"ID":"d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a","Type":"ContainerStarted","Data":"343e38fd9583ccda08f3cd35e81f105cdf5d7f9d72f6fb69bf292fdb84dc3486"} Feb 02 10:50:15 crc kubenswrapper[4782]: I0202 10:50:15.188660 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" event={"ID":"49141326-2954-4715-aaa9-86641ac21fa9","Type":"ContainerStarted","Data":"bdc99273e475184f0654c5ba31ce5697adfa1718bffcd1ef5c777b079a52d243"} Feb 02 10:50:15 crc kubenswrapper[4782]: I0202 10:50:15.189499 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vcnls" event={"ID":"9890a2a1-2fba-4553-87eb-0b70bdc93730","Type":"ContainerStarted","Data":"048aee2dddc375f67743a24499dbd9e88dcd89da987d0715d6244e523b6aad7a"} Feb 02 10:50:19 crc kubenswrapper[4782]: I0202 10:50:19.211790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vcnls" event={"ID":"9890a2a1-2fba-4553-87eb-0b70bdc93730","Type":"ContainerStarted","Data":"a22776d6b0780defeafd9f3d25867a3920ffe35dea12b0ad3f3730a8ba4093bc"} Feb 02 10:50:19 crc 
kubenswrapper[4782]: I0202 10:50:19.214163 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" event={"ID":"d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a","Type":"ContainerStarted","Data":"d44a928ff3ed9741cc47878fdbf1670147f808e1f7af38695ee1a61aa60ed2d9"} Feb 02 10:50:19 crc kubenswrapper[4782]: I0202 10:50:19.214192 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:19 crc kubenswrapper[4782]: I0202 10:50:19.215687 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" event={"ID":"49141326-2954-4715-aaa9-86641ac21fa9","Type":"ContainerStarted","Data":"1f0178062cf20cda3075d9c6fa639b92518c00383d4805cdd887f2d8ec38fa99"} Feb 02 10:50:19 crc kubenswrapper[4782]: I0202 10:50:19.232593 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-vcnls" podStartSLOduration=2.070846003 podStartE2EDuration="5.232572617s" podCreationTimestamp="2026-02-02 10:50:14 +0000 UTC" firstStartedPulling="2026-02-02 10:50:15.051592699 +0000 UTC m=+694.935785415" lastFinishedPulling="2026-02-02 10:50:18.213319313 +0000 UTC m=+698.097512029" observedRunningTime="2026-02-02 10:50:19.231380473 +0000 UTC m=+699.115573209" watchObservedRunningTime="2026-02-02 10:50:19.232572617 +0000 UTC m=+699.116765333" Feb 02 10:50:19 crc kubenswrapper[4782]: I0202 10:50:19.253569 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" podStartSLOduration=1.994345738 podStartE2EDuration="5.253547999s" podCreationTimestamp="2026-02-02 10:50:14 +0000 UTC" firstStartedPulling="2026-02-02 10:50:14.950621952 +0000 UTC m=+694.834814668" lastFinishedPulling="2026-02-02 10:50:18.209824213 +0000 UTC m=+698.094016929" observedRunningTime="2026-02-02 10:50:19.250181612 +0000 UTC m=+699.134374348" watchObservedRunningTime="2026-02-02 10:50:19.253547999 +0000 UTC m=+699.137740725" Feb 02 10:50:19 crc kubenswrapper[4782]: I0202 10:50:19.310847 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" podStartSLOduration=2.086588714 podStartE2EDuration="5.310822402s" podCreationTimestamp="2026-02-02 10:50:14 +0000 UTC" firstStartedPulling="2026-02-02 10:50:15.046870233 +0000 UTC m=+694.931062949" lastFinishedPulling="2026-02-02 10:50:18.271103921 +0000 UTC m=+698.155296637" observedRunningTime="2026-02-02 10:50:19.300530057 +0000 UTC m=+699.184722773" watchObservedRunningTime="2026-02-02 10:50:19.310822402 +0000 UTC m=+699.195015118" Feb 02 10:50:24 crc kubenswrapper[4782]: I0202 10:50:24.623312 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.354901 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-prbrn"] Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357172 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-controller" containerID="cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357286 4782 kuberuntime_container.go:808] 
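The pod_startup_latency_tracker records above encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. Reproducing the arithmetic for cert-manager-858654f9db-vcnls from the values logged above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker record for
	// cert-manager-858654f9db-vcnls (monotonic "m=+..." suffixes dropped).
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-02-02 10:50:14 +0000 UTC")
	firstPull := parse("2026-02-02 10:50:15.051592699 +0000 UTC")
	lastPull := parse("2026-02-02 10:50:18.213319313 +0000 UTC")
	running := parse("2026-02-02 10:50:19.232572617 +0000 UTC")

	e2e := running.Sub(created)     // podStartE2EDuration: 5.232572617s
	pull := lastPull.Sub(firstPull) // image pull window:   3.161726614s
	slo := e2e - pull               // podStartSLOduration: 2.070846003s
	fmt.Println(e2e, pull, slo)
}
```

The same relationship holds for the webhook and cainjector records above.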
"Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="northd" containerID="cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357220 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357536 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-node" containerID="cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357623 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-acl-logging" containerID="cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357595 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="sbdb" containerID="cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357219 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="nbdb" containerID="cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.402622 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" containerID="cri-o://81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.536680 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/3.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.538915 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-acl-logging/0.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539363 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-controller/0.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539699 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" exitCode=0 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539721 4782 generic.go:334] "Generic (PLEG): container finished" 
podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" exitCode=0 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539728 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" exitCode=0 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539736 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" exitCode=143 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539744 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" exitCode=143 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539735 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b"} Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539829 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b"} Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539855 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a"} Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539920 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac"} Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539936 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150"} Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539886 4782 scope.go:117] "RemoveContainer" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.541785 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/2.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.542304 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/1.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.542340 4782 generic.go:334] "Generic (PLEG): container finished" podID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" containerID="4fde6ad054eb082a082a2907b3951afa7c993e3cd3c0464f51b2ceec9802143d" exitCode=2 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.542362 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" 
event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerDied","Data":"4fde6ad054eb082a082a2907b3951afa7c993e3cd3c0464f51b2ceec9802143d"} Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.542805 4782 scope.go:117] "RemoveContainer" containerID="4fde6ad054eb082a082a2907b3951afa7c993e3cd3c0464f51b2ceec9802143d" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.543104 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fsqgq_openshift-multus(04d9744a-e730-45b4-9f0c-bbb5b02cd311)\"" pod="openshift-multus/multus-fsqgq" podUID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.579722 4782 scope.go:117] "RemoveContainer" containerID="b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.735385 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-acl-logging/0.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.736187 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-controller/0.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.736612 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801034 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zlv8v"] Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801346 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-acl-logging" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801372 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-acl-logging" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801399 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801406 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801414 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801420 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801433 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="northd" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801441 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="northd" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801451 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="nbdb" Feb 02 10:50:39 crc 
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.579722 4782 scope.go:117] "RemoveContainer" containerID="b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.735385 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-acl-logging/0.log"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.736187 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-controller/0.log"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.736612 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801034 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zlv8v"]
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801346 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-acl-logging"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801372 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-acl-logging"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801399 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-ovn-metrics"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801406 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-ovn-metrics"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801414 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801420 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801433 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="northd"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801441 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="northd"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801451 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="nbdb"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801457 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="nbdb"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801469 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="sbdb"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801475 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="sbdb"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801486 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801494 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801503 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-node"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801511 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-node"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801520 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801526 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801535 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801541 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801546 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801551 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801562 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kubecfg-setup"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801575 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kubecfg-setup"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801722 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-acl-logging"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801732 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801741 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801749 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="nbdb"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801758 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="sbdb"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801768 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-ovn-metrics"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801775 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801783 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="northd"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801791 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-node"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801797 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801897 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801904 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801997 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.802007 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.803886 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
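Everything from here to the end of the excerpt is the kubelet's volume manager reconciling desired state against actual state: each volume of the deleted ovnkube-node-prbrn pod (UID 2642ee4e-c16a-4e6e-9654-a67666f1bff8) is unmounted, torn down and marked detached, while the equivalent set of volumes is verified and mounted for the replacement pod ovnkube-node-zlv8v (UID 2c8c681f-aeb6-4a76-ac30-9be1d209865c). A toy sketch of that desired/actual reconcile pattern (illustrative only, not kubelet code):

```go
package main

import "fmt"

// volume identifies a mount by pod UID and volume name, loosely mirroring the
// per-pod "UniqueName" in the reconciler log lines. Purely illustrative.
type volume struct{ podUID, name string }

func reconcile(desired, actual map[volume]bool) {
	// Unmount what is mounted but no longer desired (the old pod's volumes).
	for v := range actual {
		if !desired[v] {
			fmt.Println("UnmountVolume started:", v.name, "pod", v.podUID)
		}
	}
	// Mount what is desired but not yet mounted (the new pod's volumes).
	for v := range desired {
		if !actual[v] {
			fmt.Println("MountVolume started:", v.name, "pod", v.podUID)
		}
	}
}

func main() {
	oldPod, newPod := "2642ee4e", "2c8c681f" // shortened UIDs from the log
	actual := map[volume]bool{{oldPod, "run-openvswitch"}: true, {oldPod, "host-slash"}: true}
	desired := map[volume]bool{{newPod, "run-openvswitch"}: true, {newPod, "host-slash"}: true}
	reconcile(desired, actual)
}
```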
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841128 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-slash\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841149 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-slash" (OuterVolumeSpecName: "host-slash") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841156 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-ovn\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841174 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841188 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-env-overrides\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841213 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-netns\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841237 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8flt\" (UniqueName: \"kubernetes.io/projected/2642ee4e-c16a-4e6e-9654-a67666f1bff8-kube-api-access-g8flt\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841263 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-etc-openvswitch\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841288 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841307 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-log-socket\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841319 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841331 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-ovn-kubernetes\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841379 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841411 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-var-lib-openvswitch\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841339 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-log-socket" (OuterVolumeSpecName: "log-socket") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841354 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841454 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-systemd-units\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841459 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841478 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841477 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-kubelet\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841521 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-config\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841537 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-script-lib\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841560 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-bin\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841585 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-systemd\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841600 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-netd\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841618 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-node-log\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841654 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovn-node-metrics-cert\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841818 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841841 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841870 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovnkube-config\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841903 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-env-overrides\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841932 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovnkube-script-lib\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841961 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-systemd-units\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842001 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-etc-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842037 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovn-node-metrics-cert\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842084 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842102 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-run-netns\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842118 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-systemd\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842142 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-node-log\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842157 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-var-lib-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842174 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-slash\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842203 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-kubelet\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842216 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-cni-netd\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842234 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-ovn\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842309 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-cni-bin\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841535 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841561 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841843 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842161 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842362 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmqq\" (UniqueName: \"kubernetes.io/projected/2c8c681f-aeb6-4a76-ac30-9be1d209865c-kube-api-access-dlmqq\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842409 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842434 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-node-log" (OuterVolumeSpecName: "node-log") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842463 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842538 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-log-socket\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842571 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843588 4782 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-log-socket\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843614 4782 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843633 4782 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843808 4782 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843838 4782 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843875 4782 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843893 4782 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843904 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843914 4782 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843923 4782 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843932 4782 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843941 4782 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-slash\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843951 4782 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843960 4782 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843970 4782 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843979 4782 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.847775 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.847970 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2642ee4e-c16a-4e6e-9654-a67666f1bff8-kube-api-access-g8flt" (OuterVolumeSpecName: "kube-api-access-g8flt") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "kube-api-access-g8flt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.858130 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948093 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-systemd-units\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948192 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-etc-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948245 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovn-node-metrics-cert\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948248 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-systemd-units\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948312 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-run-netns\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948318 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-etc-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948375 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-run-netns\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948398 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 
02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948451 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948467 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-systemd\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948516 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-systemd\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948519 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-node-log\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948570 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-node-log\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948593 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-slash\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948633 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-var-lib-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948704 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-var-lib-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948632 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-slash\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948765 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-kubelet\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948719 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-kubelet\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948809 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-cni-netd\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948836 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-ovn\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948885 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-cni-bin\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948915 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlmqq\" (UniqueName: \"kubernetes.io/projected/2c8c681f-aeb6-4a76-ac30-9be1d209865c-kube-api-access-dlmqq\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-log-socket\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948943 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-cni-netd\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949010 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-cni-bin\") pod 
\"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949038 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-log-socket\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949020 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949120 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovnkube-config\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948963 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-ovn\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949076 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949165 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-env-overrides\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949241 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovnkube-script-lib\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949362 4782 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949379 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949395 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8flt\" (UniqueName: 
\"kubernetes.io/projected/2642ee4e-c16a-4e6e-9654-a67666f1bff8-kube-api-access-g8flt\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949408 4782 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949046 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.950083 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovnkube-config\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.950076 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-env-overrides\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.950486 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovnkube-script-lib\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.954595 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovn-node-metrics-cert\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.965504 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlmqq\" (UniqueName: \"kubernetes.io/projected/2c8c681f-aeb6-4a76-ac30-9be1d209865c-kube-api-access-dlmqq\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.119324 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:40 crc kubenswrapper[4782]: W0202 10:50:40.138868 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c8c681f_aeb6_4a76_ac30_9be1d209865c.slice/crio-e4838ceb038c678bd934ea1865cc87e697b80fc8ab172e82b37011f28f99c5eb WatchSource:0}: Error finding container e4838ceb038c678bd934ea1865cc87e697b80fc8ab172e82b37011f28f99c5eb: Status 404 returned error can't find the container with id e4838ceb038c678bd934ea1865cc87e697b80fc8ab172e82b37011f28f99c5eb Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.548297 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/2.log" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.550527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"e4838ceb038c678bd934ea1865cc87e697b80fc8ab172e82b37011f28f99c5eb"} Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.553613 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-acl-logging/0.log" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554022 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-controller/0.log" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554319 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" exitCode=0 Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554346 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" exitCode=0 Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554355 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" exitCode=0 Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554375 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3"} Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554393 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee"} Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554402 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f"} Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554411 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" 
event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"db6a9af8a980d743bf0b991e52f1aa50a4a04f4b9f2306a972866beef0456ce6"} Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554426 4782 scope.go:117] "RemoveContainer" containerID="81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554564 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.574152 4782 scope.go:117] "RemoveContainer" containerID="344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.590832 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-prbrn"] Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.596019 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-prbrn"] Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.603534 4782 scope.go:117] "RemoveContainer" containerID="b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.622824 4782 scope.go:117] "RemoveContainer" containerID="189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.635829 4782 scope.go:117] "RemoveContainer" containerID="371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.647170 4782 scope.go:117] "RemoveContainer" containerID="7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.659960 4782 scope.go:117] "RemoveContainer" containerID="540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.672981 4782 scope.go:117] "RemoveContainer" containerID="f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.685692 4782 scope.go:117] "RemoveContainer" containerID="c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.703563 4782 scope.go:117] "RemoveContainer" containerID="81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.704470 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": container with ID starting with 81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b not found: ID does not exist" containerID="81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.704524 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b"} err="failed to get container status \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": rpc error: code = NotFound desc = could not find container \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": container with ID starting with 81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b not found: ID does not exist" Feb 02 10:50:40 crc 
kubenswrapper[4782]: I0202 10:50:40.704552 4782 scope.go:117] "RemoveContainer" containerID="344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.705143 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": container with ID starting with 344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3 not found: ID does not exist" containerID="344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.705192 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3"} err="failed to get container status \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": rpc error: code = NotFound desc = could not find container \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": container with ID starting with 344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.705205 4782 scope.go:117] "RemoveContainer" containerID="b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.705512 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": container with ID starting with b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee not found: ID does not exist" containerID="b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.705533 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee"} err="failed to get container status \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": rpc error: code = NotFound desc = could not find container \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": container with ID starting with b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.705572 4782 scope.go:117] "RemoveContainer" containerID="189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.705933 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": container with ID starting with 189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f not found: ID does not exist" containerID="189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.705977 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f"} err="failed to get container status \"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": rpc error: code = NotFound desc = could not find container 
\"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": container with ID starting with 189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.705993 4782 scope.go:117] "RemoveContainer" containerID="371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.706258 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": container with ID starting with 371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b not found: ID does not exist" containerID="371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.706282 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b"} err="failed to get container status \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": rpc error: code = NotFound desc = could not find container \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": container with ID starting with 371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.706325 4782 scope.go:117] "RemoveContainer" containerID="7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.706537 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": container with ID starting with 7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a not found: ID does not exist" containerID="7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.706579 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a"} err="failed to get container status \"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": rpc error: code = NotFound desc = could not find container \"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": container with ID starting with 7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.706603 4782 scope.go:117] "RemoveContainer" containerID="540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.706916 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": container with ID starting with 540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac not found: ID does not exist" containerID="540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.706933 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac"} 
err="failed to get container status \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": rpc error: code = NotFound desc = could not find container \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": container with ID starting with 540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.706964 4782 scope.go:117] "RemoveContainer" containerID="f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.707173 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": container with ID starting with f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150 not found: ID does not exist" containerID="f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.707192 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150"} err="failed to get container status \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": rpc error: code = NotFound desc = could not find container \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": container with ID starting with f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.707203 4782 scope.go:117] "RemoveContainer" containerID="c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.707483 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": container with ID starting with c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343 not found: ID does not exist" containerID="c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.707500 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343"} err="failed to get container status \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": rpc error: code = NotFound desc = could not find container \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": container with ID starting with c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.707511 4782 scope.go:117] "RemoveContainer" containerID="81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.707781 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b"} err="failed to get container status \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": rpc error: code = NotFound desc = could not find container \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": container with ID starting with 
81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.707796 4782 scope.go:117] "RemoveContainer" containerID="344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.708001 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3"} err="failed to get container status \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": rpc error: code = NotFound desc = could not find container \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": container with ID starting with 344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.708127 4782 scope.go:117] "RemoveContainer" containerID="b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.708504 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee"} err="failed to get container status \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": rpc error: code = NotFound desc = could not find container \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": container with ID starting with b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.708520 4782 scope.go:117] "RemoveContainer" containerID="189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.708878 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f"} err="failed to get container status \"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": rpc error: code = NotFound desc = could not find container \"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": container with ID starting with 189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.708915 4782 scope.go:117] "RemoveContainer" containerID="371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.709123 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b"} err="failed to get container status \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": rpc error: code = NotFound desc = could not find container \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": container with ID starting with 371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.709209 4782 scope.go:117] "RemoveContainer" containerID="7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.709511 4782 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a"} err="failed to get container status \"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": rpc error: code = NotFound desc = could not find container \"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": container with ID starting with 7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.709593 4782 scope.go:117] "RemoveContainer" containerID="540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.709869 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac"} err="failed to get container status \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": rpc error: code = NotFound desc = could not find container \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": container with ID starting with 540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.709910 4782 scope.go:117] "RemoveContainer" containerID="f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.710160 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150"} err="failed to get container status \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": rpc error: code = NotFound desc = could not find container \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": container with ID starting with f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.710243 4782 scope.go:117] "RemoveContainer" containerID="c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.710507 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343"} err="failed to get container status \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": rpc error: code = NotFound desc = could not find container \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": container with ID starting with c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.710531 4782 scope.go:117] "RemoveContainer" containerID="81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.710811 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b"} err="failed to get container status \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": rpc error: code = NotFound desc = could not find container \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": container with ID starting with 81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b not found: ID does not exist" Feb 
02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.710837 4782 scope.go:117] "RemoveContainer" containerID="344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.711105 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3"} err="failed to get container status \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": rpc error: code = NotFound desc = could not find container \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": container with ID starting with 344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.711183 4782 scope.go:117] "RemoveContainer" containerID="b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.711614 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee"} err="failed to get container status \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": rpc error: code = NotFound desc = could not find container \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": container with ID starting with b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.711666 4782 scope.go:117] "RemoveContainer" containerID="189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.711881 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f"} err="failed to get container status \"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": rpc error: code = NotFound desc = could not find container \"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": container with ID starting with 189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.711947 4782 scope.go:117] "RemoveContainer" containerID="371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.712345 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b"} err="failed to get container status \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": rpc error: code = NotFound desc = could not find container \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": container with ID starting with 371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.712360 4782 scope.go:117] "RemoveContainer" containerID="7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.712618 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a"} err="failed to get container status 
\"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": rpc error: code = NotFound desc = could not find container \"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": container with ID starting with 7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.712672 4782 scope.go:117] "RemoveContainer" containerID="540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.713355 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac"} err="failed to get container status \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": rpc error: code = NotFound desc = could not find container \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": container with ID starting with 540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.713443 4782 scope.go:117] "RemoveContainer" containerID="f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.713937 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150"} err="failed to get container status \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": rpc error: code = NotFound desc = could not find container \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": container with ID starting with f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.713985 4782 scope.go:117] "RemoveContainer" containerID="c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.714478 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343"} err="failed to get container status \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": rpc error: code = NotFound desc = could not find container \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": container with ID starting with c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.828094 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" path="/var/lib/kubelet/pods/2642ee4e-c16a-4e6e-9654-a67666f1bff8/volumes" Feb 02 10:50:41 crc kubenswrapper[4782]: I0202 10:50:41.561591 4782 generic.go:334] "Generic (PLEG): container finished" podID="2c8c681f-aeb6-4a76-ac30-9be1d209865c" containerID="5ce490632e0e45fd9754c7134d7ed0e71a0d338a0cf7b4881b6d2561654a0c06" exitCode=0 Feb 02 10:50:41 crc kubenswrapper[4782]: I0202 10:50:41.562116 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerDied","Data":"5ce490632e0e45fd9754c7134d7ed0e71a0d338a0cf7b4881b6d2561654a0c06"} Feb 02 10:50:42 crc kubenswrapper[4782]: I0202 10:50:42.570334 4782 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"aac2f263520be40956e4a6ea16a75574100028c7646dafc0b19277dc0ec03cd0"} Feb 02 10:50:42 crc kubenswrapper[4782]: I0202 10:50:42.570678 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"7343456d655a488d530594b46b16922b6875c61d12de8cf3cf349fffb8a151aa"} Feb 02 10:50:42 crc kubenswrapper[4782]: I0202 10:50:42.570688 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"f2bfd3065549730109c035db682820c5a4ab2a5beba6316120a466df0e63896f"} Feb 02 10:50:42 crc kubenswrapper[4782]: I0202 10:50:42.570699 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"5abcf4333986001688fc89c1cfa5270cd27600afc7c104877fd9352aa945c10a"} Feb 02 10:50:42 crc kubenswrapper[4782]: I0202 10:50:42.570707 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"884bb97ccd4cf2ced83d472174c0609fa08c31d64d6d89e21a927903b25e59eb"} Feb 02 10:50:42 crc kubenswrapper[4782]: I0202 10:50:42.570716 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"53aeaf82f022d482ed01e2e5a8f28a4d5c73360f84f6dd8469caa5c7683a0e7e"} Feb 02 10:50:45 crc kubenswrapper[4782]: I0202 10:50:45.592797 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"5756d801fb9c220d67548519b3924fcd0c55d6119c0c928303ef3b85ce7bcc14"} Feb 02 10:50:47 crc kubenswrapper[4782]: I0202 10:50:47.607786 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"1eecee32e70d66f7c75c72f0a87c431487d944eed52f36fe31af53a38518ddf7"} Feb 02 10:50:47 crc kubenswrapper[4782]: I0202 10:50:47.608225 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:47 crc kubenswrapper[4782]: I0202 10:50:47.649786 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:47 crc kubenswrapper[4782]: I0202 10:50:47.681316 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" podStartSLOduration=8.681290323 podStartE2EDuration="8.681290323s" podCreationTimestamp="2026-02-02 10:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:50:47.64485594 +0000 UTC m=+727.529048656" watchObservedRunningTime="2026-02-02 10:50:47.681290323 +0000 UTC m=+727.565483059" Feb 02 10:50:48 crc kubenswrapper[4782]: I0202 10:50:48.612927 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:48 crc kubenswrapper[4782]: I0202 10:50:48.613291 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:48 crc kubenswrapper[4782]: I0202 10:50:48.646446 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:53 crc kubenswrapper[4782]: I0202 10:50:53.820972 4782 scope.go:117] "RemoveContainer" containerID="4fde6ad054eb082a082a2907b3951afa7c993e3cd3c0464f51b2ceec9802143d" Feb 02 10:50:53 crc kubenswrapper[4782]: E0202 10:50:53.821936 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fsqgq_openshift-multus(04d9744a-e730-45b4-9f0c-bbb5b02cd311)\"" pod="openshift-multus/multus-fsqgq" podUID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.917146 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"] Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.918580 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.920274 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.926595 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.926629 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.926792 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jshv8\" (UniqueName: \"kubernetes.io/projected/c86f666c-8701-45f8-a488-85b4052a02db-kube-api-access-jshv8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.930581 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"] Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.027964 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jshv8\" (UniqueName: \"kubernetes.io/projected/c86f666c-8701-45f8-a488-85b4052a02db-kube-api-access-jshv8\") 
pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.028020 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.028036 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.028530 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.028711 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.046250 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jshv8\" (UniqueName: \"kubernetes.io/projected/c86f666c-8701-45f8-a488-85b4052a02db-kube-api-access-jshv8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.235877 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.263333 4782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(43d2604d5801646e2768a60b82a4271be395b500e931b67164b46554c5edc66d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.263417 4782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(43d2604d5801646e2768a60b82a4271be395b500e931b67164b46554c5edc66d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.263437 4782 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(43d2604d5801646e2768a60b82a4271be395b500e931b67164b46554c5edc66d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.263474 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace(c86f666c-8701-45f8-a488-85b4052a02db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace(c86f666c-8701-45f8-a488-85b4052a02db)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(43d2604d5801646e2768a60b82a4271be395b500e931b67164b46554c5edc66d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" podUID="c86f666c-8701-45f8-a488-85b4052a02db"
Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.703912 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.704619 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.723817 4782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(f6d321dad0ec615e0a46fbd3ba18cce981142e98586d4612f492b03bb2e66d01): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.723879 4782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(f6d321dad0ec615e0a46fbd3ba18cce981142e98586d4612f492b03bb2e66d01): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.723910 4782 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(f6d321dad0ec615e0a46fbd3ba18cce981142e98586d4612f492b03bb2e66d01): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.723957 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace(c86f666c-8701-45f8-a488-85b4052a02db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace(c86f666c-8701-45f8-a488-85b4052a02db)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(f6d321dad0ec615e0a46fbd3ba18cce981142e98586d4612f492b03bb2e66d01): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" podUID="c86f666c-8701-45f8-a488-85b4052a02db"
Feb 02 10:51:07 crc kubenswrapper[4782]: I0202 10:51:07.820998 4782 scope.go:117] "RemoveContainer" containerID="4fde6ad054eb082a082a2907b3951afa7c993e3cd3c0464f51b2ceec9802143d"
Feb 02 10:51:08 crc kubenswrapper[4782]: I0202 10:51:08.739038 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/2.log"
Feb 02 10:51:08 crc kubenswrapper[4782]: I0202 10:51:08.739450 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerStarted","Data":"ffd35c81492028424ba964f22ddd18326ce64e1a4f31005f5449e7599e8c0b1e"}
Feb 02 10:51:10 crc kubenswrapper[4782]: I0202 10:51:10.141330 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:51:13 crc kubenswrapper[4782]: I0202 10:51:13.820438 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:13 crc kubenswrapper[4782]: I0202 10:51:13.821034 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:13 crc kubenswrapper[4782]: I0202 10:51:13.995017 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"]
Feb 02 10:51:14 crc kubenswrapper[4782]: W0202 10:51:14.002863 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86f666c_8701_45f8_a488_85b4052a02db.slice/crio-36b1e2f62b968997e86c599d96f46bcb33a3f8ae9a0f128c3f48a2d60d564e38 WatchSource:0}: Error finding container 36b1e2f62b968997e86c599d96f46bcb33a3f8ae9a0f128c3f48a2d60d564e38: Status 404 returned error can't find the container with id 36b1e2f62b968997e86c599d96f46bcb33a3f8ae9a0f128c3f48a2d60d564e38
Feb 02 10:51:14 crc kubenswrapper[4782]: I0202 10:51:14.783758 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" event={"ID":"c86f666c-8701-45f8-a488-85b4052a02db","Type":"ContainerStarted","Data":"ba6e55759dbdc6b8180045454740c177751f8882ce6ee7422fdeb19d17838ef2"}
Feb 02 10:51:14 crc kubenswrapper[4782]: I0202 10:51:14.783805 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" event={"ID":"c86f666c-8701-45f8-a488-85b4052a02db","Type":"ContainerStarted","Data":"36b1e2f62b968997e86c599d96f46bcb33a3f8ae9a0f128c3f48a2d60d564e38"}
Feb 02 10:51:15 crc kubenswrapper[4782]: I0202 10:51:15.790780 4782 generic.go:334] "Generic (PLEG): container finished" podID="c86f666c-8701-45f8-a488-85b4052a02db" containerID="ba6e55759dbdc6b8180045454740c177751f8882ce6ee7422fdeb19d17838ef2" exitCode=0
Feb 02 10:51:15 crc kubenswrapper[4782]: I0202 10:51:15.790849 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" event={"ID":"c86f666c-8701-45f8-a488-85b4052a02db","Type":"ContainerDied","Data":"ba6e55759dbdc6b8180045454740c177751f8882ce6ee7422fdeb19d17838ef2"}
Feb 02 10:51:17 crc kubenswrapper[4782]: I0202 10:51:17.806960 4782 generic.go:334] "Generic (PLEG): container finished" podID="c86f666c-8701-45f8-a488-85b4052a02db" containerID="629031d46e3e0518550e214f208634a173a18e028d11714a15d92236ab28b3b2" exitCode=0
Feb 02 10:51:17 crc kubenswrapper[4782]: I0202 10:51:17.807088 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" event={"ID":"c86f666c-8701-45f8-a488-85b4052a02db","Type":"ContainerDied","Data":"629031d46e3e0518550e214f208634a173a18e028d11714a15d92236ab28b3b2"}
Feb 02 10:51:18 crc kubenswrapper[4782]: I0202 10:51:18.818409 4782 generic.go:334] "Generic (PLEG): container finished" podID="c86f666c-8701-45f8-a488-85b4052a02db" containerID="f92f038867a3ca5b5c1ca0c6dfe77d3d8810d5279cc2137514daf33b95ebb100" exitCode=0
Feb 02 10:51:18 crc kubenswrapper[4782]: I0202 10:51:18.818490 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" event={"ID":"c86f666c-8701-45f8-a488-85b4052a02db","Type":"ContainerDied","Data":"f92f038867a3ca5b5c1ca0c6dfe77d3d8810d5279cc2137514daf33b95ebb100"}
Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.050216 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.156002 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-util\") pod \"c86f666c-8701-45f8-a488-85b4052a02db\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") "
Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.156043 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jshv8\" (UniqueName: \"kubernetes.io/projected/c86f666c-8701-45f8-a488-85b4052a02db-kube-api-access-jshv8\") pod \"c86f666c-8701-45f8-a488-85b4052a02db\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") "
Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.156070 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-bundle\") pod \"c86f666c-8701-45f8-a488-85b4052a02db\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") "
Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.157612 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-bundle" (OuterVolumeSpecName: "bundle") pod "c86f666c-8701-45f8-a488-85b4052a02db" (UID: "c86f666c-8701-45f8-a488-85b4052a02db"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.164439 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c86f666c-8701-45f8-a488-85b4052a02db-kube-api-access-jshv8" (OuterVolumeSpecName: "kube-api-access-jshv8") pod "c86f666c-8701-45f8-a488-85b4052a02db" (UID: "c86f666c-8701-45f8-a488-85b4052a02db"). InnerVolumeSpecName "kube-api-access-jshv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.177268 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-util" (OuterVolumeSpecName: "util") pod "c86f666c-8701-45f8-a488-85b4052a02db" (UID: "c86f666c-8701-45f8-a488-85b4052a02db"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.257208 4782 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-util\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.257241 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jshv8\" (UniqueName: \"kubernetes.io/projected/c86f666c-8701-45f8-a488-85b4052a02db-kube-api-access-jshv8\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.257253 4782 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.833383 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" event={"ID":"c86f666c-8701-45f8-a488-85b4052a02db","Type":"ContainerDied","Data":"36b1e2f62b968997e86c599d96f46bcb33a3f8ae9a0f128c3f48a2d60d564e38"} Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.833423 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b1e2f62b968997e86c599d96f46bcb33a3f8ae9a0f128c3f48a2d60d564e38" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.833496 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.560590 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pfjs6"] Feb 02 10:51:23 crc kubenswrapper[4782]: E0202 10:51:23.561064 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="extract" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.561077 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="extract" Feb 02 10:51:23 crc kubenswrapper[4782]: E0202 10:51:23.561087 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="util" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.561093 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="util" Feb 02 10:51:23 crc kubenswrapper[4782]: E0202 10:51:23.561105 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="pull" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.561111 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="pull" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.561199 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="extract" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.561576 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.564728 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.564828 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.565091 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-gfhmk"
Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.583413 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pfjs6"]
Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.592828 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8xwb\" (UniqueName: \"kubernetes.io/projected/371da653-9a38-424f-9069-14e251c45e1b-kube-api-access-r8xwb\") pod \"nmstate-operator-646758c888-pfjs6\" (UID: \"371da653-9a38-424f-9069-14e251c45e1b\") " pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6"
Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.694374 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8xwb\" (UniqueName: \"kubernetes.io/projected/371da653-9a38-424f-9069-14e251c45e1b-kube-api-access-r8xwb\") pod \"nmstate-operator-646758c888-pfjs6\" (UID: \"371da653-9a38-424f-9069-14e251c45e1b\") " pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6"
Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.719652 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8xwb\" (UniqueName: \"kubernetes.io/projected/371da653-9a38-424f-9069-14e251c45e1b-kube-api-access-r8xwb\") pod \"nmstate-operator-646758c888-pfjs6\" (UID: \"371da653-9a38-424f-9069-14e251c45e1b\") " pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6"
Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.875382 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6"
Feb 02 10:51:24 crc kubenswrapper[4782]: I0202 10:51:24.100057 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pfjs6"]
Feb 02 10:51:24 crc kubenswrapper[4782]: I0202 10:51:24.857009 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6" event={"ID":"371da653-9a38-424f-9069-14e251c45e1b","Type":"ContainerStarted","Data":"0f44dd75a2ed5b556d822e41b4a5da8f95665d539bdd870f6fbb7e6dcd51265b"}
Feb 02 10:51:26 crc kubenswrapper[4782]: I0202 10:51:26.872598 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6" event={"ID":"371da653-9a38-424f-9069-14e251c45e1b","Type":"ContainerStarted","Data":"762c53de67f3c15bcb882523e6fea111f9c62a41cff007eb7a56edcc79553d3c"}
Feb 02 10:51:26 crc kubenswrapper[4782]: I0202 10:51:26.891416 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6" podStartSLOduration=1.649359283 podStartE2EDuration="3.891397749s" podCreationTimestamp="2026-02-02 10:51:23 +0000 UTC" firstStartedPulling="2026-02-02 10:51:24.105750175 +0000 UTC m=+763.989942891" lastFinishedPulling="2026-02-02 10:51:26.347788651 +0000 UTC m=+766.231981357" observedRunningTime="2026-02-02 10:51:26.88968543 +0000 UTC m=+766.773878146" watchObservedRunningTime="2026-02-02 10:51:26.891397749 +0000 UTC m=+766.775590465"
Feb 02 10:51:31 crc kubenswrapper[4782]: I0202 10:51:31.645824 4782 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.335205 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-djhxz"]
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.336373 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.341791 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jd8tl"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.343677 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"]
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.344679 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.346401 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.358491 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"]
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.364397 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-djhxz"]
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.397435 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wjctm"]
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.398796 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wjctm"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514468 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4dm\" (UniqueName: \"kubernetes.io/projected/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-kube-api-access-vs4dm\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514586 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-dbus-socket\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514708 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltvwh\" (UniqueName: \"kubernetes.io/projected/a30862c2-daa1-42d6-8815-aabc8387e789-kube-api-access-ltvwh\") pod \"nmstate-metrics-54757c584b-djhxz\" (UID: \"a30862c2-daa1-42d6-8815-aabc8387e789\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514749 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m98t\" (UniqueName: \"kubernetes.io/projected/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-kube-api-access-7m98t\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514792 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-nmstate-lock\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514866 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-ovs-socket\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514912 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.528129 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"]
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.528957 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.531322 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rrkgq"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.531322 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.531789 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.571485 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"]
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.615996 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-ovs-socket\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616060 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz8rk\" (UniqueName: \"kubernetes.io/projected/00048f8e-9669-413d-b215-6a787d5270c0-kube-api-access-wz8rk\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616094 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/00048f8e-9669-413d-b215-6a787d5270c0-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616120 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616155 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4dm\" (UniqueName: \"kubernetes.io/projected/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-kube-api-access-vs4dm\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616183 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-dbus-socket\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616234 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltvwh\" (UniqueName: \"kubernetes.io/projected/a30862c2-daa1-42d6-8815-aabc8387e789-kube-api-access-ltvwh\") pod \"nmstate-metrics-54757c584b-djhxz\" (UID: \"a30862c2-daa1-42d6-8815-aabc8387e789\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz"
\"a30862c2-daa1-42d6-8815-aabc8387e789\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/00048f8e-9669-413d-b215-6a787d5270c0-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616293 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m98t\" (UniqueName: \"kubernetes.io/projected/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-kube-api-access-7m98t\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616322 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-nmstate-lock\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616396 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-nmstate-lock\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616478 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-ovs-socket\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: E0202 10:51:32.616572 4782 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 02 10:51:32 crc kubenswrapper[4782]: E0202 10:51:32.616650 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-tls-key-pair podName:cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a nodeName:}" failed. No retries permitted until 2026-02-02 10:51:33.116602271 +0000 UTC m=+773.000794987 (durationBeforeRetry 500ms). 
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.617270 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-dbus-socket\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.637704 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m98t\" (UniqueName: \"kubernetes.io/projected/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-kube-api-access-7m98t\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.659462 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4dm\" (UniqueName: \"kubernetes.io/projected/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-kube-api-access-vs4dm\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.664394 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltvwh\" (UniqueName: \"kubernetes.io/projected/a30862c2-daa1-42d6-8815-aabc8387e789-kube-api-access-ltvwh\") pod \"nmstate-metrics-54757c584b-djhxz\" (UID: \"a30862c2-daa1-42d6-8815-aabc8387e789\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.668590 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.716809 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz8rk\" (UniqueName: \"kubernetes.io/projected/00048f8e-9669-413d-b215-6a787d5270c0-kube-api-access-wz8rk\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.716853 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/00048f8e-9669-413d-b215-6a787d5270c0-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.716918 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/00048f8e-9669-413d-b215-6a787d5270c0-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.718133 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/00048f8e-9669-413d-b215-6a787d5270c0-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.720381 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/00048f8e-9669-413d-b215-6a787d5270c0-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.729184 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wjctm"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.741246 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz8rk\" (UniqueName: \"kubernetes.io/projected/00048f8e-9669-413d-b215-6a787d5270c0-kube-api-access-wz8rk\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.772994 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-595664cbc7-qhdgt"]
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.773763 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.796577 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-595664cbc7-qhdgt"]
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.817935 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-oauth-config\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.817990 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-service-ca\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.818020 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-config\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.818057 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q8kg\" (UniqueName: \"kubernetes.io/projected/7cbf3206-6442-45fd-a75d-3d47f579b2f7-kube-api-access-8q8kg\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.818078 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-serving-cert\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.818131 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-oauth-serving-cert\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.818153 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-trusted-ca-bundle\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.847308 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918805 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wjctm" event={"ID":"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe","Type":"ContainerStarted","Data":"a71dcffdeb4c13a3e0e1af77aade9350bdb5e901d02c9c27aef428120219f775"}
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918836 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-service-ca\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918868 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-config\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q8kg\" (UniqueName: \"kubernetes.io/projected/7cbf3206-6442-45fd-a75d-3d47f579b2f7-kube-api-access-8q8kg\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918918 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-serving-cert\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918959 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-oauth-serving-cert\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918975 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-trusted-ca-bundle\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.919007 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-oauth-config\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.919758 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-service-ca\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.921235 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-config\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.921718 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-trusted-ca-bundle\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.922243 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-oauth-serving-cert\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.924791 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-serving-cert\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.925573 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-oauth-config\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.947682 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q8kg\" (UniqueName: \"kubernetes.io/projected/7cbf3206-6442-45fd-a75d-3d47f579b2f7-kube-api-access-8q8kg\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.993095 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-djhxz"]
Feb 02 10:51:32 crc kubenswrapper[4782]: W0202 10:51:32.998185 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda30862c2_daa1_42d6_8815_aabc8387e789.slice/crio-71a5f8b358766d118b5aac82d8bcde713b4175a9c0fcc5d016b5234877b8cc1c WatchSource:0}: Error finding container 71a5f8b358766d118b5aac82d8bcde713b4175a9c0fcc5d016b5234877b8cc1c: Status 404 returned error can't find the container with id 71a5f8b358766d118b5aac82d8bcde713b4175a9c0fcc5d016b5234877b8cc1c
Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.092597 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.120813 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"
Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.124680 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"
Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.261255 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-595664cbc7-qhdgt"]
Feb 02 10:51:33 crc kubenswrapper[4782]: W0202 10:51:33.267766 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cbf3206_6442_45fd_a75d_3d47f579b2f7.slice/crio-de54ab543e77af4d8099f55bbc661ee9471c8d6ee4ed277179ae5459361f9bc6 WatchSource:0}: Error finding container de54ab543e77af4d8099f55bbc661ee9471c8d6ee4ed277179ae5459361f9bc6: Status 404 returned error can't find the container with id de54ab543e77af4d8099f55bbc661ee9471c8d6ee4ed277179ae5459361f9bc6
Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.280277 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"
Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.303177 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"]
Feb 02 10:51:33 crc kubenswrapper[4782]: W0202 10:51:33.324292 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00048f8e_9669_413d_b215_6a787d5270c0.slice/crio-fed3a1813de494bf11a79edd6889849ec4a50320e8231abb042467d1e2a2f570 WatchSource:0}: Error finding container fed3a1813de494bf11a79edd6889849ec4a50320e8231abb042467d1e2a2f570: Status 404 returned error can't find the container with id fed3a1813de494bf11a79edd6889849ec4a50320e8231abb042467d1e2a2f570
Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.477766 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"]
Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.924477 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" event={"ID":"00048f8e-9669-413d-b215-6a787d5270c0","Type":"ContainerStarted","Data":"fed3a1813de494bf11a79edd6889849ec4a50320e8231abb042467d1e2a2f570"}
Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.925506 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" event={"ID":"a30862c2-daa1-42d6-8815-aabc8387e789","Type":"ContainerStarted","Data":"71a5f8b358766d118b5aac82d8bcde713b4175a9c0fcc5d016b5234877b8cc1c"}
Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.927526 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595664cbc7-qhdgt" event={"ID":"7cbf3206-6442-45fd-a75d-3d47f579b2f7","Type":"ContainerStarted","Data":"e52b9f87e0d0b14037a16d805a0678c32be7d06872e8d1b75c34a6183d08595d"}
event={"ID":"7cbf3206-6442-45fd-a75d-3d47f579b2f7","Type":"ContainerStarted","Data":"e52b9f87e0d0b14037a16d805a0678c32be7d06872e8d1b75c34a6183d08595d"} Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.927573 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595664cbc7-qhdgt" event={"ID":"7cbf3206-6442-45fd-a75d-3d47f579b2f7","Type":"ContainerStarted","Data":"de54ab543e77af4d8099f55bbc661ee9471c8d6ee4ed277179ae5459361f9bc6"} Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.928527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" event={"ID":"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a","Type":"ContainerStarted","Data":"436d0675473f76cfa68423a2c317248aedd4d06aca69b5fdd653c1d1a7cf4a9b"} Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.954252 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-595664cbc7-qhdgt" podStartSLOduration=1.954235054 podStartE2EDuration="1.954235054s" podCreationTimestamp="2026-02-02 10:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:51:33.951535756 +0000 UTC m=+773.835728472" watchObservedRunningTime="2026-02-02 10:51:33.954235054 +0000 UTC m=+773.838427800" Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.957570 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" event={"ID":"00048f8e-9669-413d-b215-6a787d5270c0","Type":"ContainerStarted","Data":"6f0690fcc12f010bcece1a95166689701dd5002a69db0b41dc27fd99226f8a8d"} Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.960314 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wjctm" event={"ID":"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe","Type":"ContainerStarted","Data":"e7123dfee0613507431c34aae9d14d2379c0940c74a78ca7a73bae106eee75d0"} Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.960446 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.962119 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" event={"ID":"a30862c2-daa1-42d6-8815-aabc8387e789","Type":"ContainerStarted","Data":"de01a463285691edddbbd2764b18eb42839959c6a2ba12dc542863b806587d72"} Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.963354 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" event={"ID":"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a","Type":"ContainerStarted","Data":"989eb95748fcb05882a334056d6b25c40bc9071e9b98fe2d87c90bbd091ae8b2"} Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.963889 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.980782 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" podStartSLOduration=2.435103086 podStartE2EDuration="4.980764811s" podCreationTimestamp="2026-02-02 10:51:32 +0000 UTC" firstStartedPulling="2026-02-02 10:51:33.327241229 +0000 UTC m=+773.211433935" lastFinishedPulling="2026-02-02 10:51:35.872902944 +0000 UTC m=+775.757095660" observedRunningTime="2026-02-02 
Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.995689 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" podStartSLOduration=2.576814422 podStartE2EDuration="4.995665838s" podCreationTimestamp="2026-02-02 10:51:32 +0000 UTC" firstStartedPulling="2026-02-02 10:51:33.48624138 +0000 UTC m=+773.370434096" lastFinishedPulling="2026-02-02 10:51:35.905092796 +0000 UTC m=+775.789285512" observedRunningTime="2026-02-02 10:51:36.993586088 +0000 UTC m=+776.877778804" watchObservedRunningTime="2026-02-02 10:51:36.995665838 +0000 UTC m=+776.879858554"
Feb 02 10:51:37 crc kubenswrapper[4782]: I0202 10:51:37.015873 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wjctm" podStartSLOduration=1.922558996 podStartE2EDuration="5.015858565s" podCreationTimestamp="2026-02-02 10:51:32 +0000 UTC" firstStartedPulling="2026-02-02 10:51:32.784013882 +0000 UTC m=+772.668206598" lastFinishedPulling="2026-02-02 10:51:35.877313451 +0000 UTC m=+775.761506167" observedRunningTime="2026-02-02 10:51:37.012444288 +0000 UTC m=+776.896637004" watchObservedRunningTime="2026-02-02 10:51:37.015858565 +0000 UTC m=+776.900051281"
Feb 02 10:51:38 crc kubenswrapper[4782]: I0202 10:51:38.979658 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" event={"ID":"a30862c2-daa1-42d6-8815-aabc8387e789","Type":"ContainerStarted","Data":"df8a5c5a46740e81ef06ab89c4d49fe2c539d350c88951e66d8be6ed4c08a9c5"}
Feb 02 10:51:38 crc kubenswrapper[4782]: I0202 10:51:38.995491 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" podStartSLOduration=1.5740391919999999 podStartE2EDuration="6.995472271s" podCreationTimestamp="2026-02-02 10:51:32 +0000 UTC" firstStartedPulling="2026-02-02 10:51:33.000609851 +0000 UTC m=+772.884802577" lastFinishedPulling="2026-02-02 10:51:38.42204294 +0000 UTC m=+778.306235656" observedRunningTime="2026-02-02 10:51:38.993274858 +0000 UTC m=+778.877467604" watchObservedRunningTime="2026-02-02 10:51:38.995472271 +0000 UTC m=+778.879664987"
Feb 02 10:51:42 crc kubenswrapper[4782]: I0202 10:51:42.765741 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wjctm"
Feb 02 10:51:43 crc kubenswrapper[4782]: I0202 10:51:43.093562 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:43 crc kubenswrapper[4782]: I0202 10:51:43.093836 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:43 crc kubenswrapper[4782]: I0202 10:51:43.098183 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:44 crc kubenswrapper[4782]: I0202 10:51:44.008724 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-595664cbc7-qhdgt"
Feb 02 10:51:44 crc kubenswrapper[4782]: I0202 10:51:44.088988 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sf9m8"]
Feb 02 10:51:52 crc kubenswrapper[4782]: I0202 10:51:52.951504 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 10:51:52 crc kubenswrapper[4782]: I0202 10:51:52.951823 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 10:51:53 crc kubenswrapper[4782]: I0202 10:51:53.290047 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.008728 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"]
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.010555 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.012393 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.027518 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"]
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.067409 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.067455 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.067481 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c566z\" (UniqueName: \"kubernetes.io/projected/499d9fd2-e479-4774-ad4b-aaefa3ac9026-kube-api-access-c566z\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.168323 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c566z\" (UniqueName: \"kubernetes.io/projected/499d9fd2-e479-4774-ad4b-aaefa3ac9026-kube-api-access-c566z\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.168448 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.168485 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.168975 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.168998 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.188676 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c566z\" (UniqueName: \"kubernetes.io/projected/499d9fd2-e479-4774-ad4b-aaefa3ac9026-kube-api-access-c566z\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.327125 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.517606 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"] Feb 02 10:52:07 crc kubenswrapper[4782]: I0202 10:52:07.141814 4782 generic.go:334] "Generic (PLEG): container finished" podID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerID="0469cc9b81c04188639a62db67580dcb6eff0a9ec2ce428ea2ea4d74ade63f63" exitCode=0 Feb 02 10:52:07 crc kubenswrapper[4782]: I0202 10:52:07.141858 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" event={"ID":"499d9fd2-e479-4774-ad4b-aaefa3ac9026","Type":"ContainerDied","Data":"0469cc9b81c04188639a62db67580dcb6eff0a9ec2ce428ea2ea4d74ade63f63"} Feb 02 10:52:07 crc kubenswrapper[4782]: I0202 10:52:07.141884 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" event={"ID":"499d9fd2-e479-4774-ad4b-aaefa3ac9026","Type":"ContainerStarted","Data":"f7fc72918ffce23fb6b318f086b25ec3564f4c9b6e4fd0bad204d7e25d878f2f"} Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.133300 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-sf9m8" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" containerID="cri-o://8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee" gracePeriod=15 Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.156726 4782 generic.go:334] "Generic (PLEG): container finished" podID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerID="a8f8070e7407c219db31a43339188edbfa511a91d6df0ee046c8e116c7be5f24" exitCode=0 Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.156784 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" event={"ID":"499d9fd2-e479-4774-ad4b-aaefa3ac9026","Type":"ContainerDied","Data":"a8f8070e7407c219db31a43339188edbfa511a91d6df0ee046c8e116c7be5f24"} Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.530594 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sf9m8_76afda26-696c-4996-bc58-1c928e4fa92a/console/0.log" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.530691 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.582785 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xnj2n"] Feb 02 10:52:09 crc kubenswrapper[4782]: E0202 10:52:09.583149 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.583162 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.583266 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.584125 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.588257 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xnj2n"] Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615085 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-oauth-serving-cert\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615141 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-serving-cert\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615219 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-service-ca\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615273 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-trusted-ca-bundle\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615296 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-oauth-config\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615318 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bklv5\" (UniqueName: \"kubernetes.io/projected/76afda26-696c-4996-bc58-1c928e4fa92a-kube-api-access-bklv5\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615390 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-console-config\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615586 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfw4x\" (UniqueName: \"kubernetes.io/projected/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-kube-api-access-wfw4x\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615611 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-utilities\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " 
pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615710 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-catalog-content\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.616026 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.616063 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-service-ca" (OuterVolumeSpecName: "service-ca") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.616321 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-console-config" (OuterVolumeSpecName: "console-config") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.616578 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.623941 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76afda26-696c-4996-bc58-1c928e4fa92a-kube-api-access-bklv5" (OuterVolumeSpecName: "kube-api-access-bklv5") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). InnerVolumeSpecName "kube-api-access-bklv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.636754 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.640101 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.716904 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-catalog-content\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717436 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfw4x\" (UniqueName: \"kubernetes.io/projected/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-kube-api-access-wfw4x\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717469 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-utilities\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717538 4782 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717557 4782 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717543 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-catalog-content\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717569 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717611 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717623 4782 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717658 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bklv5\" (UniqueName: \"kubernetes.io/projected/76afda26-696c-4996-bc58-1c928e4fa92a-kube-api-access-bklv5\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717675 4782 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-console-config\") on node \"crc\" 
DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717968 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-utilities\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.745950 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfw4x\" (UniqueName: \"kubernetes.io/projected/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-kube-api-access-wfw4x\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.902206 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.143887 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xnj2n"] Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.166313 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnj2n" event={"ID":"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019","Type":"ContainerStarted","Data":"76b887cf6cdadf5ce40b5ae85a2c309ce7496129ec1523601bbfaf633d9f7f8d"} Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.170734 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sf9m8_76afda26-696c-4996-bc58-1c928e4fa92a/console/0.log" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.170815 4782 generic.go:334] "Generic (PLEG): container finished" podID="76afda26-696c-4996-bc58-1c928e4fa92a" containerID="8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee" exitCode=2 Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.170846 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.170888 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sf9m8" event={"ID":"76afda26-696c-4996-bc58-1c928e4fa92a","Type":"ContainerDied","Data":"8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee"} Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.170941 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sf9m8" event={"ID":"76afda26-696c-4996-bc58-1c928e4fa92a","Type":"ContainerDied","Data":"a512fcceae6cfaeaad197794b5b6c708f15cf79898b3102381c39333768e348a"} Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.170963 4782 scope.go:117] "RemoveContainer" containerID="8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.182329 4782 generic.go:334] "Generic (PLEG): container finished" podID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerID="4de6411a4afb22cf2605ed9637f2740d5b4eae8aba99b0e5f2cfc322dd434901" exitCode=0 Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.182379 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" event={"ID":"499d9fd2-e479-4774-ad4b-aaefa3ac9026","Type":"ContainerDied","Data":"4de6411a4afb22cf2605ed9637f2740d5b4eae8aba99b0e5f2cfc322dd434901"} Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.203729 4782 scope.go:117] "RemoveContainer" containerID="8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee" Feb 02 10:52:10 crc kubenswrapper[4782]: E0202 10:52:10.205527 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee\": container with ID starting with 8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee not found: ID does not exist" containerID="8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.205554 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee"} err="failed to get container status \"8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee\": rpc error: code = NotFound desc = could not find container \"8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee\": container with ID starting with 8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee not found: ID does not exist" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.232711 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sf9m8"] Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.236431 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-sf9m8"] Feb 02 10:52:10 crc kubenswrapper[4782]: E0202 10:52:10.237714 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76afda26_696c_4996_bc58_1c928e4fa92a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.837320 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="76afda26-696c-4996-bc58-1c928e4fa92a" path="/var/lib/kubelet/pods/76afda26-696c-4996-bc58-1c928e4fa92a/volumes" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.188476 4782 generic.go:334] "Generic (PLEG): container finished" podID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerID="24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8" exitCode=0 Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.188794 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnj2n" event={"ID":"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019","Type":"ContainerDied","Data":"24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8"} Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.428768 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.447103 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c566z\" (UniqueName: \"kubernetes.io/projected/499d9fd2-e479-4774-ad4b-aaefa3ac9026-kube-api-access-c566z\") pod \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.447223 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-bundle\") pod \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.447357 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-util\") pod \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.449851 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-bundle" (OuterVolumeSpecName: "bundle") pod "499d9fd2-e479-4774-ad4b-aaefa3ac9026" (UID: "499d9fd2-e479-4774-ad4b-aaefa3ac9026"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.454170 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/499d9fd2-e479-4774-ad4b-aaefa3ac9026-kube-api-access-c566z" (OuterVolumeSpecName: "kube-api-access-c566z") pod "499d9fd2-e479-4774-ad4b-aaefa3ac9026" (UID: "499d9fd2-e479-4774-ad4b-aaefa3ac9026"). InnerVolumeSpecName "kube-api-access-c566z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.475213 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-util" (OuterVolumeSpecName: "util") pod "499d9fd2-e479-4774-ad4b-aaefa3ac9026" (UID: "499d9fd2-e479-4774-ad4b-aaefa3ac9026"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.550279 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c566z\" (UniqueName: \"kubernetes.io/projected/499d9fd2-e479-4774-ad4b-aaefa3ac9026-kube-api-access-c566z\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.550543 4782 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.550658 4782 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-util\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:12 crc kubenswrapper[4782]: I0202 10:52:12.197342 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" event={"ID":"499d9fd2-e479-4774-ad4b-aaefa3ac9026","Type":"ContainerDied","Data":"f7fc72918ffce23fb6b318f086b25ec3564f4c9b6e4fd0bad204d7e25d878f2f"} Feb 02 10:52:12 crc kubenswrapper[4782]: I0202 10:52:12.197724 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7fc72918ffce23fb6b318f086b25ec3564f4c9b6e4fd0bad204d7e25d878f2f" Feb 02 10:52:12 crc kubenswrapper[4782]: I0202 10:52:12.197448 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" Feb 02 10:52:13 crc kubenswrapper[4782]: I0202 10:52:13.205494 4782 generic.go:334] "Generic (PLEG): container finished" podID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerID="a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c" exitCode=0 Feb 02 10:52:13 crc kubenswrapper[4782]: I0202 10:52:13.205543 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnj2n" event={"ID":"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019","Type":"ContainerDied","Data":"a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c"} Feb 02 10:52:14 crc kubenswrapper[4782]: I0202 10:52:14.213052 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnj2n" event={"ID":"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019","Type":"ContainerStarted","Data":"b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691"} Feb 02 10:52:14 crc kubenswrapper[4782]: I0202 10:52:14.247943 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xnj2n" podStartSLOduration=2.592860101 podStartE2EDuration="5.247924678s" podCreationTimestamp="2026-02-02 10:52:09 +0000 UTC" firstStartedPulling="2026-02-02 10:52:11.191548875 +0000 UTC m=+811.075741591" lastFinishedPulling="2026-02-02 10:52:13.846613452 +0000 UTC m=+813.730806168" observedRunningTime="2026-02-02 10:52:14.244189321 +0000 UTC m=+814.128382057" watchObservedRunningTime="2026-02-02 10:52:14.247924678 +0000 UTC m=+814.132117394" Feb 02 10:52:19 crc kubenswrapper[4782]: I0202 10:52:19.902868 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:19 crc kubenswrapper[4782]: I0202 10:52:19.904532 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 
10:52:20 crc kubenswrapper[4782]: I0202 10:52:20.958211 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xnj2n" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="registry-server" probeResult="failure" output=< Feb 02 10:52:20 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:52:20 crc kubenswrapper[4782]: > Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.719117 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm"] Feb 02 10:52:22 crc kubenswrapper[4782]: E0202 10:52:22.720127 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="util" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.720220 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="util" Feb 02 10:52:22 crc kubenswrapper[4782]: E0202 10:52:22.720286 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="extract" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.720334 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="extract" Feb 02 10:52:22 crc kubenswrapper[4782]: E0202 10:52:22.720382 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="pull" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.720434 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="pull" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.720575 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="extract" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.721037 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.724438 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.725028 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-hdptp" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.725265 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.725367 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.725377 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.747758 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm"] Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.893704 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-webhook-cert\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.893775 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-apiservice-cert\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.893824 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjnrs\" (UniqueName: \"kubernetes.io/projected/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-kube-api-access-bjnrs\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.951126 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.951174 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.994904 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjnrs\" (UniqueName: 
\"kubernetes.io/projected/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-kube-api-access-bjnrs\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.994960 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-webhook-cert\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.995001 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-apiservice-cert\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.002911 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-apiservice-cert\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.015994 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-webhook-cert\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.020854 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt"] Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.021608 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.022770 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjnrs\" (UniqueName: \"kubernetes.io/projected/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-kube-api-access-bjnrs\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.034180 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.034180 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-qvmq5" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.034180 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.038119 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.044502 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt"] Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.197818 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78f09d2d-237b-4474-b4b8-f59f49997e44-webhook-cert\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.197936 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76dbw\" (UniqueName: \"kubernetes.io/projected/78f09d2d-237b-4474-b4b8-f59f49997e44-kube-api-access-76dbw\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.197981 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78f09d2d-237b-4474-b4b8-f59f49997e44-apiservice-cert\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.299327 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76dbw\" (UniqueName: \"kubernetes.io/projected/78f09d2d-237b-4474-b4b8-f59f49997e44-kube-api-access-76dbw\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.299394 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78f09d2d-237b-4474-b4b8-f59f49997e44-apiservice-cert\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.299443 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78f09d2d-237b-4474-b4b8-f59f49997e44-webhook-cert\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.304728 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78f09d2d-237b-4474-b4b8-f59f49997e44-apiservice-cert\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.313489 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/78f09d2d-237b-4474-b4b8-f59f49997e44-webhook-cert\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.321072 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76dbw\" (UniqueName: \"kubernetes.io/projected/78f09d2d-237b-4474-b4b8-f59f49997e44-kube-api-access-76dbw\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.326377 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm"] Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.430782 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.681577 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt"] Feb 02 10:52:23 crc kubenswrapper[4782]: W0202 10:52:23.705154 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78f09d2d_237b_4474_b4b8_f59f49997e44.slice/crio-99cbeba7932d76f57a85a80fd7807997333b4419069a544a6f35259a0f99181b WatchSource:0}: Error finding container 99cbeba7932d76f57a85a80fd7807997333b4419069a544a6f35259a0f99181b: Status 404 returned error can't find the container with id 99cbeba7932d76f57a85a80fd7807997333b4419069a544a6f35259a0f99181b Feb 02 10:52:24 crc kubenswrapper[4782]: I0202 10:52:24.262030 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" event={"ID":"78f09d2d-237b-4474-b4b8-f59f49997e44","Type":"ContainerStarted","Data":"99cbeba7932d76f57a85a80fd7807997333b4419069a544a6f35259a0f99181b"} Feb 02 10:52:24 crc kubenswrapper[4782]: I0202 10:52:24.263855 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" event={"ID":"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9","Type":"ContainerStarted","Data":"d07cae36a89a00c97ff6399dcf9907b4d2c742ae250c0b8dee12a30c246a3562"} Feb 02 10:52:29 crc kubenswrapper[4782]: I0202 10:52:29.977847 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.037326 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.224551 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xnj2n"] Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.310071 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" event={"ID":"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9","Type":"ContainerStarted","Data":"09dfd459f1a5f0376ed1406f62a25382bfdf5c3f6aabcaca55c22c6b259e4990"} Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.310169 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.311498 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" event={"ID":"78f09d2d-237b-4474-b4b8-f59f49997e44","Type":"ContainerStarted","Data":"3d3636c0bba62836323e5623733d207bd7e79f36e27f04f7f728791486bc5539"} Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.328202 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" podStartSLOduration=2.499391369 podStartE2EDuration="8.328184795s" podCreationTimestamp="2026-02-02 10:52:22 +0000 UTC" firstStartedPulling="2026-02-02 10:52:23.334014866 +0000 UTC m=+823.218207582" lastFinishedPulling="2026-02-02 10:52:29.162808292 +0000 UTC m=+829.047001008" observedRunningTime="2026-02-02 10:52:30.327473655 +0000 UTC m=+830.211666371" watchObservedRunningTime="2026-02-02 10:52:30.328184795 +0000 UTC m=+830.212377511" Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.352773 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" podStartSLOduration=2.8791660869999998 podStartE2EDuration="8.352751138s" podCreationTimestamp="2026-02-02 10:52:22 +0000 UTC" firstStartedPulling="2026-02-02 10:52:23.71048293 +0000 UTC m=+823.594675646" lastFinishedPulling="2026-02-02 10:52:29.184067981 +0000 UTC m=+829.068260697" observedRunningTime="2026-02-02 10:52:30.350269497 +0000 UTC m=+830.234462223" watchObservedRunningTime="2026-02-02 10:52:30.352751138 +0000 UTC m=+830.236943864" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.317189 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.317375 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xnj2n" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="registry-server" containerID="cri-o://b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691" gracePeriod=2 Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.717680 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.847779 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-utilities\") pod \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.847889 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfw4x\" (UniqueName: \"kubernetes.io/projected/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-kube-api-access-wfw4x\") pod \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.847944 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-catalog-content\") pod \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.849104 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-utilities" (OuterVolumeSpecName: "utilities") pod "5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" (UID: "5b8a020d-5ce8-4a2e-b4dc-9a1c77990019"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.864805 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-kube-api-access-wfw4x" (OuterVolumeSpecName: "kube-api-access-wfw4x") pod "5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" (UID: "5b8a020d-5ce8-4a2e-b4dc-9a1c77990019"). InnerVolumeSpecName "kube-api-access-wfw4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.950341 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfw4x\" (UniqueName: \"kubernetes.io/projected/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-kube-api-access-wfw4x\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.950683 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.968296 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" (UID: "5b8a020d-5ce8-4a2e-b4dc-9a1c77990019"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.051968 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.324881 4782 generic.go:334] "Generic (PLEG): container finished" podID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerID="b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691" exitCode=0 Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.324978 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnj2n" event={"ID":"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019","Type":"ContainerDied","Data":"b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691"} Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.325023 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnj2n" event={"ID":"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019","Type":"ContainerDied","Data":"76b887cf6cdadf5ce40b5ae85a2c309ce7496129ec1523601bbfaf633d9f7f8d"} Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.325044 4782 scope.go:117] "RemoveContainer" containerID="b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.325978 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.352676 4782 scope.go:117] "RemoveContainer" containerID="a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.361629 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xnj2n"] Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.373489 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xnj2n"] Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.382961 4782 scope.go:117] "RemoveContainer" containerID="24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.401951 4782 scope.go:117] "RemoveContainer" containerID="b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691" Feb 02 10:52:32 crc kubenswrapper[4782]: E0202 10:52:32.402890 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691\": container with ID starting with b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691 not found: ID does not exist" containerID="b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.403018 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691"} err="failed to get container status \"b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691\": rpc error: code = NotFound desc = could not find container \"b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691\": container with ID starting with b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691 not found: ID does not exist" Feb 02 10:52:32 crc 
kubenswrapper[4782]: I0202 10:52:32.403124 4782 scope.go:117] "RemoveContainer" containerID="a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c" Feb 02 10:52:32 crc kubenswrapper[4782]: E0202 10:52:32.404798 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c\": container with ID starting with a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c not found: ID does not exist" containerID="a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.404944 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c"} err="failed to get container status \"a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c\": rpc error: code = NotFound desc = could not find container \"a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c\": container with ID starting with a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c not found: ID does not exist" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.405053 4782 scope.go:117] "RemoveContainer" containerID="24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8" Feb 02 10:52:32 crc kubenswrapper[4782]: E0202 10:52:32.405419 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8\": container with ID starting with 24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8 not found: ID does not exist" containerID="24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.405531 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8"} err="failed to get container status \"24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8\": rpc error: code = NotFound desc = could not find container \"24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8\": container with ID starting with 24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8 not found: ID does not exist" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.828800 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" path="/var/lib/kubelet/pods/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019/volumes" Feb 02 10:52:43 crc kubenswrapper[4782]: I0202 10:52:43.436651 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:52 crc kubenswrapper[4782]: I0202 10:52:52.951560 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:52:52 crc kubenswrapper[4782]: I0202 10:52:52.952203 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:52:52 crc kubenswrapper[4782]: I0202 10:52:52.952277 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:52:52 crc kubenswrapper[4782]: I0202 10:52:52.952968 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2dc043efe5736739c3acc8fe9716ce3a52d3c218a415682bfde40984fdbbbf0c"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:52:52 crc kubenswrapper[4782]: I0202 10:52:52.953039 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://2dc043efe5736739c3acc8fe9716ce3a52d3c218a415682bfde40984fdbbbf0c" gracePeriod=600 Feb 02 10:52:53 crc kubenswrapper[4782]: I0202 10:52:53.454809 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="2dc043efe5736739c3acc8fe9716ce3a52d3c218a415682bfde40984fdbbbf0c" exitCode=0 Feb 02 10:52:53 crc kubenswrapper[4782]: I0202 10:52:53.454906 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"2dc043efe5736739c3acc8fe9716ce3a52d3c218a415682bfde40984fdbbbf0c"} Feb 02 10:52:53 crc kubenswrapper[4782]: I0202 10:52:53.455158 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"723d0d966296427a3d1b5e2811fbfcf2b8df7a346539a78c1cbaf730d23723a1"} Feb 02 10:52:53 crc kubenswrapper[4782]: I0202 10:52:53.455191 4782 scope.go:117] "RemoveContainer" containerID="f6754467e1adce39d1aaf093b6b8963c3db110696bed13c171a5267d1c658dfc" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.041710 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.698797 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72"] Feb 02 10:53:03 crc kubenswrapper[4782]: E0202 10:53:03.699226 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="extract-utilities" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.699248 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="extract-utilities" Feb 02 10:53:03 crc kubenswrapper[4782]: E0202 10:53:03.699263 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="registry-server" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.699271 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="registry-server" Feb 02 10:53:03 crc kubenswrapper[4782]: E0202 10:53:03.699286 4782 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="extract-content" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.699294 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="extract-content" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.699439 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="registry-server" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.700159 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.703612 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.703756 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7njp5" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.710879 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-s297l"] Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.714441 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.719957 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.722854 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.714711 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72"] Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830092 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr69b\" (UniqueName: \"kubernetes.io/projected/a3b12ebe-32d3-4d07-b723-64cd83951d38-kube-api-access-qr69b\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830488 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-startup\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830506 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-conf\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830583 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3b12ebe-32d3-4d07-b723-64cd83951d38-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:03 crc 
kubenswrapper[4782]: I0202 10:53:03.830615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-sockets\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830657 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-reloader\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830686 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76jkp\" (UniqueName: \"kubernetes.io/projected/ef8673fb-6fdf-4c32-a573-3583f4188d97-kube-api-access-76jkp\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830706 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-metrics\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830908 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef8673fb-6fdf-4c32-a573-3583f4188d97-metrics-certs\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.847447 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-w7rg8"] Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.848602 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-w7rg8" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.851741 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.852020 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.852194 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.852338 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2dvzx" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.863916 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-wxfg2"] Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.864998 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.866863 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.881709 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-wxfg2"] Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932422 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr69b\" (UniqueName: \"kubernetes.io/projected/a3b12ebe-32d3-4d07-b723-64cd83951d38-kube-api-access-qr69b\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932501 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-conf\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932524 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-startup\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932542 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3b12ebe-32d3-4d07-b723-64cd83951d38-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932559 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-sockets\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932584 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-reloader\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932615 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76jkp\" (UniqueName: \"kubernetes.io/projected/ef8673fb-6fdf-4c32-a573-3583f4188d97-kube-api-access-76jkp\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932635 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-metrics\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932742 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/ef8673fb-6fdf-4c32-a573-3583f4188d97-metrics-certs\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.933125 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-reloader\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: E0202 10:53:03.933209 4782 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 02 10:53:03 crc kubenswrapper[4782]: E0202 10:53:03.933245 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3b12ebe-32d3-4d07-b723-64cd83951d38-cert podName:a3b12ebe-32d3-4d07-b723-64cd83951d38 nodeName:}" failed. No retries permitted until 2026-02-02 10:53:04.433233971 +0000 UTC m=+864.317426687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a3b12ebe-32d3-4d07-b723-64cd83951d38-cert") pod "frr-k8s-webhook-server-7df86c4f6c-8zl72" (UID: "a3b12ebe-32d3-4d07-b723-64cd83951d38") : secret "frr-k8s-webhook-server-cert" not found Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.933516 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-conf\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.933794 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-startup\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.933964 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-sockets\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.933963 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-metrics\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.939053 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef8673fb-6fdf-4c32-a573-3583f4188d97-metrics-certs\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.960422 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr69b\" (UniqueName: \"kubernetes.io/projected/a3b12ebe-32d3-4d07-b723-64cd83951d38-kube-api-access-qr69b\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:03 crc 
kubenswrapper[4782]: I0202 10:53:03.960740 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76jkp\" (UniqueName: \"kubernetes.io/projected/ef8673fb-6fdf-4c32-a573-3583f4188d97-kube-api-access-76jkp\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034417 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034512 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-metrics-certs\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034555 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-cert\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034681 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8hnl\" (UniqueName: \"kubernetes.io/projected/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-kube-api-access-x8hnl\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034712 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhtjt\" (UniqueName: \"kubernetes.io/projected/7dcb22a8-d257-446a-8264-63b33c40e24a-kube-api-access-vhtjt\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034738 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-metrics-certs\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034793 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7dcb22a8-d257-446a-8264-63b33c40e24a-metallb-excludel2\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.089106 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136135 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8hnl\" (UniqueName: \"kubernetes.io/projected/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-kube-api-access-x8hnl\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136206 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhtjt\" (UniqueName: \"kubernetes.io/projected/7dcb22a8-d257-446a-8264-63b33c40e24a-kube-api-access-vhtjt\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136235 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-metrics-certs\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136257 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7dcb22a8-d257-446a-8264-63b33c40e24a-metallb-excludel2\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136315 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136330 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-cert\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136345 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-metrics-certs\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: E0202 10:53:04.136550 4782 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 10:53:04 crc kubenswrapper[4782]: E0202 10:53:04.136707 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist podName:7dcb22a8-d257-446a-8264-63b33c40e24a nodeName:}" failed. No retries permitted until 2026-02-02 10:53:04.6366256 +0000 UTC m=+864.520818396 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist") pod "speaker-w7rg8" (UID: "7dcb22a8-d257-446a-8264-63b33c40e24a") : secret "metallb-memberlist" not found Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.137510 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7dcb22a8-d257-446a-8264-63b33c40e24a-metallb-excludel2\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.140762 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.141811 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-metrics-certs\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.142533 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-metrics-certs\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.150593 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-cert\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.158365 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8hnl\" (UniqueName: \"kubernetes.io/projected/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-kube-api-access-x8hnl\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.164795 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhtjt\" (UniqueName: \"kubernetes.io/projected/7dcb22a8-d257-446a-8264-63b33c40e24a-kube-api-access-vhtjt\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.209402 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.441927 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3b12ebe-32d3-4d07-b723-64cd83951d38-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.445489 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3b12ebe-32d3-4d07-b723-64cd83951d38-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.480226 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-wxfg2"] Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.511051 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"bd58aca5c60cbeb8f481ade1c282f404bf53e5c8cb2d15cf8058bef84f1330da"} Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.512184 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wxfg2" event={"ID":"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a","Type":"ContainerStarted","Data":"fc0bcef6df83895a2f9a40e9bbe2eeab8ae8639023d878ee236eb87a57572d5f"} Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.645304 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: E0202 10:53:04.645752 4782 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 10:53:04 crc kubenswrapper[4782]: E0202 10:53:04.645804 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist podName:7dcb22a8-d257-446a-8264-63b33c40e24a nodeName:}" failed. No retries permitted until 2026-02-02 10:53:05.645788818 +0000 UTC m=+865.529981534 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist") pod "speaker-w7rg8" (UID: "7dcb22a8-d257-446a-8264-63b33c40e24a") : secret "metallb-memberlist" not found Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.676216 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.935515 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72"] Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.526110 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wxfg2" event={"ID":"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a","Type":"ContainerStarted","Data":"9090f9af60ff0b73b1f47cea23c88281f6c81117878b3df1dd43d140724ae790"} Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.526157 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wxfg2" event={"ID":"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a","Type":"ContainerStarted","Data":"4d3c622497c18e1835809575e4ff3791857ca302664eeb9ef1c7ab90c445e948"} Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.526252 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.527311 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" event={"ID":"a3b12ebe-32d3-4d07-b723-64cd83951d38","Type":"ContainerStarted","Data":"45386974435adf5dcf0051e07b1aee4b910803f51ecdcef62b9953933c9740bb"} Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.549488 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-wxfg2" podStartSLOduration=2.549467935 podStartE2EDuration="2.549467935s" podCreationTimestamp="2026-02-02 10:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:05.545082379 +0000 UTC m=+865.429275105" watchObservedRunningTime="2026-02-02 10:53:05.549467935 +0000 UTC m=+865.433660661" Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.661851 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.670229 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.683567 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-w7rg8" Feb 02 10:53:06 crc kubenswrapper[4782]: I0202 10:53:06.540103 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w7rg8" event={"ID":"7dcb22a8-d257-446a-8264-63b33c40e24a","Type":"ContainerStarted","Data":"3882d369caecd00974a1599da76dd7132434be43ec20af6d88ae8db114757189"} Feb 02 10:53:06 crc kubenswrapper[4782]: I0202 10:53:06.540535 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w7rg8" event={"ID":"7dcb22a8-d257-446a-8264-63b33c40e24a","Type":"ContainerStarted","Data":"67a31c2c3509d3a5e446d2298904f632314777d8dc00e80deb0249f741f7f1a6"} Feb 02 10:53:06 crc kubenswrapper[4782]: I0202 10:53:06.540555 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w7rg8" event={"ID":"7dcb22a8-d257-446a-8264-63b33c40e24a","Type":"ContainerStarted","Data":"ddb46dc6995313facc0b90babb21760431d879ba08ce7922996f41def4a3af89"} Feb 02 10:53:06 crc kubenswrapper[4782]: I0202 10:53:06.540798 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-w7rg8" Feb 02 10:53:06 crc kubenswrapper[4782]: I0202 10:53:06.566584 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-w7rg8" podStartSLOduration=3.566568397 podStartE2EDuration="3.566568397s" podCreationTimestamp="2026-02-02 10:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:06.562869061 +0000 UTC m=+866.447061777" watchObservedRunningTime="2026-02-02 10:53:06.566568397 +0000 UTC m=+866.450761113" Feb 02 10:53:14 crc kubenswrapper[4782]: I0202 10:53:14.227693 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:14 crc kubenswrapper[4782]: I0202 10:53:14.597111 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" event={"ID":"a3b12ebe-32d3-4d07-b723-64cd83951d38","Type":"ContainerStarted","Data":"0ca37a8595781ef4eb014ca286c2f73e69d30bb8ca6948d9e0090d418990bb2f"} Feb 02 10:53:14 crc kubenswrapper[4782]: I0202 10:53:14.597459 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:14 crc kubenswrapper[4782]: I0202 10:53:14.598804 4782 generic.go:334] "Generic (PLEG): container finished" podID="ef8673fb-6fdf-4c32-a573-3583f4188d97" containerID="d1f9150e4bd83cb9a3718e0687d175a4d21e3aa52005eb523790d22630b5d499" exitCode=0 Feb 02 10:53:14 crc kubenswrapper[4782]: I0202 10:53:14.598834 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerDied","Data":"d1f9150e4bd83cb9a3718e0687d175a4d21e3aa52005eb523790d22630b5d499"} Feb 02 10:53:14 crc kubenswrapper[4782]: I0202 10:53:14.621208 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" podStartSLOduration=3.180896781 podStartE2EDuration="11.621193149s" podCreationTimestamp="2026-02-02 10:53:03 +0000 UTC" firstStartedPulling="2026-02-02 10:53:04.944896186 +0000 UTC m=+864.829088902" lastFinishedPulling="2026-02-02 10:53:13.385192554 +0000 UTC m=+873.269385270" observedRunningTime="2026-02-02 10:53:14.619507251 +0000 UTC m=+874.503699967" 
watchObservedRunningTime="2026-02-02 10:53:14.621193149 +0000 UTC m=+874.505385865" Feb 02 10:53:15 crc kubenswrapper[4782]: I0202 10:53:15.609152 4782 generic.go:334] "Generic (PLEG): container finished" podID="ef8673fb-6fdf-4c32-a573-3583f4188d97" containerID="4015ecf5aa26633862cb50b9b7b5f3e73dc6ee9dccd604b439998091b5317ad7" exitCode=0 Feb 02 10:53:15 crc kubenswrapper[4782]: I0202 10:53:15.609264 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerDied","Data":"4015ecf5aa26633862cb50b9b7b5f3e73dc6ee9dccd604b439998091b5317ad7"} Feb 02 10:53:16 crc kubenswrapper[4782]: I0202 10:53:16.619172 4782 generic.go:334] "Generic (PLEG): container finished" podID="ef8673fb-6fdf-4c32-a573-3583f4188d97" containerID="2ce571ef64fef7e6faff138d09b8ddc6e3b7b4ee44f43656754ad95f6dfc069d" exitCode=0 Feb 02 10:53:16 crc kubenswrapper[4782]: I0202 10:53:16.619237 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerDied","Data":"2ce571ef64fef7e6faff138d09b8ddc6e3b7b4ee44f43656754ad95f6dfc069d"} Feb 02 10:53:17 crc kubenswrapper[4782]: I0202 10:53:17.632660 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"a3f361566a0cde9c3882d2ca7792f0f1f6dd6e4c6ca0e42f5bfa089290a31758"} Feb 02 10:53:17 crc kubenswrapper[4782]: I0202 10:53:17.633626 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"77bd5259dcef66dd87009eb43baac72998e6af4f1526263eb371738f4ba447cb"} Feb 02 10:53:17 crc kubenswrapper[4782]: I0202 10:53:17.633708 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"ba40206599728e80bb17f434c23a30c29fb8dbadd208629a8a4f8cb6f5e4a343"} Feb 02 10:53:17 crc kubenswrapper[4782]: I0202 10:53:17.633722 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"60bec9446b5c0e34da3c90f2e6b4d5f4998a35ffdc28186c4b7af951e888b3c9"} Feb 02 10:53:17 crc kubenswrapper[4782]: I0202 10:53:17.633732 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"35b165ebe6f4073c72257573d1248bf09d283efb9efd34809751b0a28bada6bd"} Feb 02 10:53:18 crc kubenswrapper[4782]: I0202 10:53:18.646973 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"dde157fea63d879fe418c40328ef4bf1203698dacef0bbf2345db5f1e746c48d"} Feb 02 10:53:18 crc kubenswrapper[4782]: I0202 10:53:18.647999 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:18 crc kubenswrapper[4782]: I0202 10:53:18.671210 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-s297l" podStartSLOduration=6.458440501 podStartE2EDuration="15.67119048s" podCreationTimestamp="2026-02-02 10:53:03 +0000 UTC" firstStartedPulling="2026-02-02 10:53:04.216831745 +0000 UTC 
m=+864.101024451" lastFinishedPulling="2026-02-02 10:53:13.429581714 +0000 UTC m=+873.313774430" observedRunningTime="2026-02-02 10:53:18.669152191 +0000 UTC m=+878.553344927" watchObservedRunningTime="2026-02-02 10:53:18.67119048 +0000 UTC m=+878.555383206" Feb 02 10:53:19 crc kubenswrapper[4782]: I0202 10:53:19.089540 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:19 crc kubenswrapper[4782]: I0202 10:53:19.133083 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:24 crc kubenswrapper[4782]: I0202 10:53:24.690042 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:25 crc kubenswrapper[4782]: I0202 10:53:25.692541 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-w7rg8" Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.836417 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ml428"] Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.839260 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.841324 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.841590 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fpgp6" Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.843434 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.858998 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ml428"] Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.972052 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nzvv\" (UniqueName: \"kubernetes.io/projected/504a2863-da7c-4a03-b973-0f687ca20746-kube-api-access-4nzvv\") pod \"openstack-operator-index-ml428\" (UID: \"504a2863-da7c-4a03-b973-0f687ca20746\") " pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:32 crc kubenswrapper[4782]: I0202 10:53:32.073315 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nzvv\" (UniqueName: \"kubernetes.io/projected/504a2863-da7c-4a03-b973-0f687ca20746-kube-api-access-4nzvv\") pod \"openstack-operator-index-ml428\" (UID: \"504a2863-da7c-4a03-b973-0f687ca20746\") " pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:32 crc kubenswrapper[4782]: I0202 10:53:32.096088 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nzvv\" (UniqueName: \"kubernetes.io/projected/504a2863-da7c-4a03-b973-0f687ca20746-kube-api-access-4nzvv\") pod \"openstack-operator-index-ml428\" (UID: \"504a2863-da7c-4a03-b973-0f687ca20746\") " pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:32 crc kubenswrapper[4782]: I0202 10:53:32.185895 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:32 crc kubenswrapper[4782]: I0202 10:53:32.637056 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ml428"] Feb 02 10:53:32 crc kubenswrapper[4782]: I0202 10:53:32.729294 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ml428" event={"ID":"504a2863-da7c-4a03-b973-0f687ca20746","Type":"ContainerStarted","Data":"7a527c6ac7c0e6b201896da98324705c08623625c01be9905c43359c6018808a"} Feb 02 10:53:34 crc kubenswrapper[4782]: I0202 10:53:34.091948 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:35 crc kubenswrapper[4782]: I0202 10:53:35.748901 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ml428" event={"ID":"504a2863-da7c-4a03-b973-0f687ca20746","Type":"ContainerStarted","Data":"ac975d33c75f19f9be9f4fd04ddbf772b3a6a045fe9eb2985335e19bc898cfa8"} Feb 02 10:53:35 crc kubenswrapper[4782]: I0202 10:53:35.763750 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ml428" podStartSLOduration=2.197265886 podStartE2EDuration="4.763721738s" podCreationTimestamp="2026-02-02 10:53:31 +0000 UTC" firstStartedPulling="2026-02-02 10:53:32.645245891 +0000 UTC m=+892.529438627" lastFinishedPulling="2026-02-02 10:53:35.211701753 +0000 UTC m=+895.095894479" observedRunningTime="2026-02-02 10:53:35.76203418 +0000 UTC m=+895.646226906" watchObservedRunningTime="2026-02-02 10:53:35.763721738 +0000 UTC m=+895.647914474" Feb 02 10:53:42 crc kubenswrapper[4782]: I0202 10:53:42.186269 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:42 crc kubenswrapper[4782]: I0202 10:53:42.186687 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:42 crc kubenswrapper[4782]: I0202 10:53:42.214572 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:42 crc kubenswrapper[4782]: I0202 10:53:42.834129 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.283247 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh"] Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.284659 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.295474 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xgqcq" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.298121 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh"] Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.345856 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-util\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.346396 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-bundle\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.346433 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz5lg\" (UniqueName: \"kubernetes.io/projected/120b307b-b163-4e00-be79-cacf3e7e84e1-kube-api-access-zz5lg\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.448326 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz5lg\" (UniqueName: \"kubernetes.io/projected/120b307b-b163-4e00-be79-cacf3e7e84e1-kube-api-access-zz5lg\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.448502 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-util\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.448625 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-bundle\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.449041 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-util\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.449313 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-bundle\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.467716 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz5lg\" (UniqueName: \"kubernetes.io/projected/120b307b-b163-4e00-be79-cacf3e7e84e1-kube-api-access-zz5lg\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.616929 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.871146 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh"] Feb 02 10:53:45 crc kubenswrapper[4782]: I0202 10:53:45.816299 4782 generic.go:334] "Generic (PLEG): container finished" podID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerID="0f4bd2528c24be8b50d21847eaec3477c04c302ad2bba04bb4b9c1eb7fa8ad6f" exitCode=0 Feb 02 10:53:45 crc kubenswrapper[4782]: I0202 10:53:45.816508 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" event={"ID":"120b307b-b163-4e00-be79-cacf3e7e84e1","Type":"ContainerDied","Data":"0f4bd2528c24be8b50d21847eaec3477c04c302ad2bba04bb4b9c1eb7fa8ad6f"} Feb 02 10:53:45 crc kubenswrapper[4782]: I0202 10:53:45.817169 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" event={"ID":"120b307b-b163-4e00-be79-cacf3e7e84e1","Type":"ContainerStarted","Data":"2c97412fb26337a0ccfa4e88ca937f911bb93e2ca588bdd44449119803a07801"} Feb 02 10:53:46 crc kubenswrapper[4782]: I0202 10:53:46.826850 4782 generic.go:334] "Generic (PLEG): container finished" podID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerID="026e19d34ba2d799042f63db2424aea9e4f5f07a6a4103945ac8d0baa8e1ab2a" exitCode=0 Feb 02 10:53:46 crc kubenswrapper[4782]: I0202 10:53:46.830376 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" event={"ID":"120b307b-b163-4e00-be79-cacf3e7e84e1","Type":"ContainerDied","Data":"026e19d34ba2d799042f63db2424aea9e4f5f07a6a4103945ac8d0baa8e1ab2a"} Feb 02 10:53:47 crc kubenswrapper[4782]: I0202 10:53:47.849279 4782 generic.go:334] "Generic (PLEG): container finished" podID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerID="8dfbe940e50b058cf629bb46e1067b5e84ce8d5159ffe85d7b7c01b767a0aa84" exitCode=0 Feb 02 10:53:47 crc kubenswrapper[4782]: I0202 10:53:47.849358 4782 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" event={"ID":"120b307b-b163-4e00-be79-cacf3e7e84e1","Type":"ContainerDied","Data":"8dfbe940e50b058cf629bb46e1067b5e84ce8d5159ffe85d7b7c01b767a0aa84"} Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.088258 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.215748 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz5lg\" (UniqueName: \"kubernetes.io/projected/120b307b-b163-4e00-be79-cacf3e7e84e1-kube-api-access-zz5lg\") pod \"120b307b-b163-4e00-be79-cacf3e7e84e1\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.215817 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-bundle\") pod \"120b307b-b163-4e00-be79-cacf3e7e84e1\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.215866 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-util\") pod \"120b307b-b163-4e00-be79-cacf3e7e84e1\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.216940 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-bundle" (OuterVolumeSpecName: "bundle") pod "120b307b-b163-4e00-be79-cacf3e7e84e1" (UID: "120b307b-b163-4e00-be79-cacf3e7e84e1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.222085 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120b307b-b163-4e00-be79-cacf3e7e84e1-kube-api-access-zz5lg" (OuterVolumeSpecName: "kube-api-access-zz5lg") pod "120b307b-b163-4e00-be79-cacf3e7e84e1" (UID: "120b307b-b163-4e00-be79-cacf3e7e84e1"). InnerVolumeSpecName "kube-api-access-zz5lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.236798 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-util" (OuterVolumeSpecName: "util") pod "120b307b-b163-4e00-be79-cacf3e7e84e1" (UID: "120b307b-b163-4e00-be79-cacf3e7e84e1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.317664 4782 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.317916 4782 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-util\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.317985 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz5lg\" (UniqueName: \"kubernetes.io/projected/120b307b-b163-4e00-be79-cacf3e7e84e1-kube-api-access-zz5lg\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.862832 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" event={"ID":"120b307b-b163-4e00-be79-cacf3e7e84e1","Type":"ContainerDied","Data":"2c97412fb26337a0ccfa4e88ca937f911bb93e2ca588bdd44449119803a07801"} Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.862869 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c97412fb26337a0ccfa4e88ca937f911bb93e2ca588bdd44449119803a07801" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.862979 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.691607 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"] Feb 02 10:53:53 crc kubenswrapper[4782]: E0202 10:53:53.692209 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="pull" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692224 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="pull" Feb 02 10:53:53 crc kubenswrapper[4782]: E0202 10:53:53.692245 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="extract" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692252 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="extract" Feb 02 10:53:53 crc kubenswrapper[4782]: E0202 10:53:53.692267 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="util" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692274 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="util" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692428 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="extract" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692895 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.691607 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"]
Feb 02 10:53:53 crc kubenswrapper[4782]: E0202 10:53:53.692209 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="pull"
Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692224 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="pull"
Feb 02 10:53:53 crc kubenswrapper[4782]: E0202 10:53:53.692245 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="extract"
Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692252 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="extract"
Feb 02 10:53:53 crc kubenswrapper[4782]: E0202 10:53:53.692267 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="util"
Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692274 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="util"
Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692428 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="extract"
Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692895 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"
Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.696166 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-wnscv"
Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.776703 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lzhz\" (UniqueName: \"kubernetes.io/projected/c12a72da-af7d-4f2e-b15d-bb90fa6bd818-kube-api-access-5lzhz\") pod \"openstack-operator-controller-init-68b945c8c7-jwf5m\" (UID: \"c12a72da-af7d-4f2e-b15d-bb90fa6bd818\") " pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"
Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.800279 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"]
Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.878181 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lzhz\" (UniqueName: \"kubernetes.io/projected/c12a72da-af7d-4f2e-b15d-bb90fa6bd818-kube-api-access-5lzhz\") pod \"openstack-operator-controller-init-68b945c8c7-jwf5m\" (UID: \"c12a72da-af7d-4f2e-b15d-bb90fa6bd818\") " pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"
Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.918315 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lzhz\" (UniqueName: \"kubernetes.io/projected/c12a72da-af7d-4f2e-b15d-bb90fa6bd818-kube-api-access-5lzhz\") pod \"openstack-operator-controller-init-68b945c8c7-jwf5m\" (UID: \"c12a72da-af7d-4f2e-b15d-bb90fa6bd818\") " pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"
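The ADD of openstack-operator-controller-init-68b945c8c7-jwf5m shows the standard admission sequence: RemoveStaleState clears the CPU and memory manager assignments still recorded for the deleted bundle pod's pull/extract/util containers, the pod's dockercfg pull secret lands in the reflector cache, and the lone volume runs through VerifyControllerAttachedVolume, MountVolume started, and MountVolume.SetUp succeeded. The kube-api-access-5lzhz volume is not declared in the operator's manifest; the API server injects a projected volume roughly like the sketch below (k8s.io/api types; the token path, the roughly one-hour expiry, and the kube-root-ca.crt/namespace sources are upstream defaults assumed here, not values read from this log):

```go
// Approximate shape of the injected kube-api-access-* projected volume.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // assumed upstream default, slightly over an hour
	vol := corev1.Volume{
		Name: "kube-api-access-5lzhz", // random suffix, unique per pod
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```

MountVolume.SetUp materializes this under /var/lib/kubelet/pods/c12a72da-af7d-4f2e-b15d-bb90fa6bd818/volumes/kubernetes.io~projected/kube-api-access-5lzhz, which is the path the UniqueName in these entries encodes.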
Feb 02 10:53:54 crc kubenswrapper[4782]: I0202 10:53:54.009613 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"
Feb 02 10:53:54 crc kubenswrapper[4782]: I0202 10:53:54.363656 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"]
Feb 02 10:53:54 crc kubenswrapper[4782]: I0202 10:53:54.899118 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" event={"ID":"c12a72da-af7d-4f2e-b15d-bb90fa6bd818","Type":"ContainerStarted","Data":"bf806aa920ee8763ac21b423ddb02d614044fdabae3502ff8c129ee6a13bf39a"}
Feb 02 10:54:00 crc kubenswrapper[4782]: I0202 10:54:00.955965 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" event={"ID":"c12a72da-af7d-4f2e-b15d-bb90fa6bd818","Type":"ContainerStarted","Data":"4f690700cb253cbec709c144d0fd093081f25eb893cb99a3f1a19c5f737ccbe8"}
Feb 02 10:54:00 crc kubenswrapper[4782]: I0202 10:54:00.957392 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"
Feb 02 10:54:00 crc kubenswrapper[4782]: I0202 10:54:00.988202 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" podStartSLOduration=2.430693278 podStartE2EDuration="7.988181711s" podCreationTimestamp="2026-02-02 10:53:53 +0000 UTC" firstStartedPulling="2026-02-02 10:53:54.37318804 +0000 UTC m=+914.257380756" lastFinishedPulling="2026-02-02 10:53:59.930676483 +0000 UTC m=+919.814869189" observedRunningTime="2026-02-02 10:54:00.981710096 +0000 UTC m=+920.865902822" watchObservedRunningTime="2026-02-02 10:54:00.988181711 +0000 UTC m=+920.872374427"
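The "Observed pod startup duration" entry above is worth decoding: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (10:54:00.988181711 minus 10:53:53 = 7.988181711s), and podStartSLOduration is that figure minus the image pull window taken from the monotonic m=+ offsets (919.814869189 minus 914.257380756 = 5.557488433s of pulling, leaving 2.430693278s). A self-contained check of that reading (timestamps copied from the entry; the field interpretation is an assumption that matches the numbers, not something documented in the log):

```go
// Recompute the durations in the "Observed pod startup duration" entry.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches the log's wall-clock timestamps; fractional digits optional.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-02 10:53:53 +0000 UTC")
	running := mustParse("2026-02-02 10:54:00.988181711 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)
	fmt.Println(e2e) // 7.988181711s == podStartE2EDuration

	// Pull window from the monotonic m=+ offsets in the same entry.
	pull := 919.814869189 - 914.257380756 // lastFinishedPulling - firstStartedPulling
	fmt.Printf("%.9fs\n", pull)               // 5.557488433s
	fmt.Printf("%.9fs\n", e2e.Seconds()-pull) // 2.430693278s == podStartSLOduration
}
```

So of the roughly eight seconds between creation and running, about 5.6s was image pull, and the SLO-tracked remainder is the 2.43s the tracker reports.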
Feb 02 10:54:14 crc kubenswrapper[4782]: I0202 10:54:14.012196 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.785084 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn"]
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.786606 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn"
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.788870 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pz896"
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.800692 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh"]
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.801441 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh"
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.804082 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-85x2p"
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.807313 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn"]
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.815915 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgvh\" (UniqueName: \"kubernetes.io/projected/0aa487d3-a703-4ed6-a44c-bc40eb8272ce-kube-api-access-tfgvh\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-5ngrn\" (UID: \"0aa487d3-a703-4ed6-a44c-bc40eb8272ce\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn"
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.819770 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j"]
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.820900 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j"
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.828020 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tmxnj"
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.833871 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh"]
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.854001 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl"]
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.854840 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl"
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.858043 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sz9fm"
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.864548 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j"]
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.890462 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5"]
Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.891445 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.895827 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2n6rr" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.903295 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.919410 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sbb7\" (UniqueName: \"kubernetes.io/projected/9ba082c6-4f91-48d6-b5ec-198f46abc135-kube-api-access-5sbb7\") pod \"designate-operator-controller-manager-6d9697b7f4-5vj4j\" (UID: \"9ba082c6-4f91-48d6-b5ec-198f46abc135\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.919502 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7298\" (UniqueName: \"kubernetes.io/projected/bfafd643-4798-4519-934d-8ec3e2e677d9-kube-api-access-g7298\") pod \"cinder-operator-controller-manager-8d874c8fc-vj4sh\" (UID: \"bfafd643-4798-4519-934d-8ec3e2e677d9\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.919664 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh95q\" (UniqueName: \"kubernetes.io/projected/7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7-kube-api-access-kh95q\") pod \"heat-operator-controller-manager-69d6db494d-fkwh5\" (UID: \"7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.919700 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq2js\" (UniqueName: \"kubernetes.io/projected/b03fe987-deab-47e7-829a-b822ab061f20-kube-api-access-cq2js\") pod \"glance-operator-controller-manager-8886f4c47-v7tzl\" (UID: \"b03fe987-deab-47e7-829a-b822ab061f20\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.919719 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgvh\" (UniqueName: \"kubernetes.io/projected/0aa487d3-a703-4ed6-a44c-bc40eb8272ce-kube-api-access-tfgvh\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-5ngrn\" (UID: \"0aa487d3-a703-4ed6-a44c-bc40eb8272ce\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.921790 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.922492 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.932778 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-67xzx" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.939068 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.952583 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.972775 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgvh\" (UniqueName: \"kubernetes.io/projected/0aa487d3-a703-4ed6-a44c-bc40eb8272ce-kube-api-access-tfgvh\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-5ngrn\" (UID: \"0aa487d3-a703-4ed6-a44c-bc40eb8272ce\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.991548 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.992302 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.996178 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-n6j25" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.007442 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.008248 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.018866 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.019061 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2qdwh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.020186 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh95q\" (UniqueName: \"kubernetes.io/projected/7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7-kube-api-access-kh95q\") pod \"heat-operator-controller-manager-69d6db494d-fkwh5\" (UID: \"7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.020227 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6b4\" (UniqueName: \"kubernetes.io/projected/224f30b2-1084-4934-8d06-67975a9776ad-kube-api-access-fm6b4\") pod \"horizon-operator-controller-manager-5fb775575f-7z5k7\" (UID: \"224f30b2-1084-4934-8d06-67975a9776ad\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.020252 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq2js\" (UniqueName: \"kubernetes.io/projected/b03fe987-deab-47e7-829a-b822ab061f20-kube-api-access-cq2js\") pod \"glance-operator-controller-manager-8886f4c47-v7tzl\" (UID: \"b03fe987-deab-47e7-829a-b822ab061f20\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.020295 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sbb7\" (UniqueName: \"kubernetes.io/projected/9ba082c6-4f91-48d6-b5ec-198f46abc135-kube-api-access-5sbb7\") pod \"designate-operator-controller-manager-6d9697b7f4-5vj4j\" (UID: \"9ba082c6-4f91-48d6-b5ec-198f46abc135\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.020324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7298\" (UniqueName: \"kubernetes.io/projected/bfafd643-4798-4519-934d-8ec3e2e677d9-kube-api-access-g7298\") pod \"cinder-operator-controller-manager-8d874c8fc-vj4sh\" (UID: \"bfafd643-4798-4519-934d-8ec3e2e677d9\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.020359 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2z8x\" (UniqueName: \"kubernetes.io/projected/6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27-kube-api-access-p2z8x\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v94dv\" (UID: \"6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.026922 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.027903 4782 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.044951 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.062155 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-qksl6" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.066891 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sbb7\" (UniqueName: \"kubernetes.io/projected/9ba082c6-4f91-48d6-b5ec-198f46abc135-kube-api-access-5sbb7\") pod \"designate-operator-controller-manager-6d9697b7f4-5vj4j\" (UID: \"9ba082c6-4f91-48d6-b5ec-198f46abc135\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.066970 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.096241 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh95q\" (UniqueName: \"kubernetes.io/projected/7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7-kube-api-access-kh95q\") pod \"heat-operator-controller-manager-69d6db494d-fkwh5\" (UID: \"7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.098013 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.099306 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.105019 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq2js\" (UniqueName: \"kubernetes.io/projected/b03fe987-deab-47e7-829a-b822ab061f20-kube-api-access-cq2js\") pod \"glance-operator-controller-manager-8886f4c47-v7tzl\" (UID: \"b03fe987-deab-47e7-829a-b822ab061f20\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.109311 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7298\" (UniqueName: \"kubernetes.io/projected/bfafd643-4798-4519-934d-8ec3e2e677d9-kube-api-access-g7298\") pod \"cinder-operator-controller-manager-8d874c8fc-vj4sh\" (UID: \"bfafd643-4798-4519-934d-8ec3e2e677d9\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.109669 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5tgth" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.118357 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.121166 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4j9k\" (UniqueName: \"kubernetes.io/projected/3624e93f-9208-4f82-9f55-12381a637262-kube-api-access-s4j9k\") pod \"mariadb-operator-controller-manager-67bf948998-n88d6\" (UID: \"3624e93f-9208-4f82-9f55-12381a637262\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.121248 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm6b4\" (UniqueName: \"kubernetes.io/projected/224f30b2-1084-4934-8d06-67975a9776ad-kube-api-access-fm6b4\") pod \"horizon-operator-controller-manager-5fb775575f-7z5k7\" (UID: \"224f30b2-1084-4934-8d06-67975a9776ad\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.121324 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.121422 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2j9r\" (UniqueName: \"kubernetes.io/projected/009bc68d-5c70-42ca-9008-152206fd954d-kube-api-access-n2j9r\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.121570 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6dsh\" (UniqueName: \"kubernetes.io/projected/6b276ac2-533f-43c9-94a1-f0d0e4eb6993-kube-api-access-m6dsh\") pod \"keystone-operator-controller-manager-84f48565d4-w7gld\" (UID: \"6b276ac2-533f-43c9-94a1-f0d0e4eb6993\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.121605 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2z8x\" (UniqueName: \"kubernetes.io/projected/6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27-kube-api-access-p2z8x\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v94dv\" (UID: \"6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.133183 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.144881 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.147800 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.158994 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.159950 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.177629 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.183706 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.207676 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-p6wdr" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.222846 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2z8x\" (UniqueName: \"kubernetes.io/projected/6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27-kube-api-access-p2z8x\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v94dv\" (UID: \"6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.223373 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5"
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.223726 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2j9r\" (UniqueName: \"kubernetes.io/projected/009bc68d-5c70-42ca-9008-152206fd954d-kube-api-access-n2j9r\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j"
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.223769 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6dsh\" (UniqueName: \"kubernetes.io/projected/6b276ac2-533f-43c9-94a1-f0d0e4eb6993-kube-api-access-m6dsh\") pod \"keystone-operator-controller-manager-84f48565d4-w7gld\" (UID: \"6b276ac2-533f-43c9-94a1-f0d0e4eb6993\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld"
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.223813 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4j9k\" (UniqueName: \"kubernetes.io/projected/3624e93f-9208-4f82-9f55-12381a637262-kube-api-access-s4j9k\") pod \"mariadb-operator-controller-manager-67bf948998-n88d6\" (UID: \"3624e93f-9208-4f82-9f55-12381a637262\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6"
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.223852 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j"
Feb 02 10:54:34 crc kubenswrapper[4782]: E0202 10:54:34.224006 4782 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
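The E-level entries here are a startup ordering race, not a persistent failure: the infra-operator Deployment mounts a cert volume backed by secret infra-operator-webhook-server-cert, which nothing has created yet, so secret.go:188 fails and nestedpendingoperations parks the mount with a 500ms backoff (it grows to 1s on the retry further down, and the same race hits openstack-baremetal-operator-webhook-server-cert). A quick client-go probe for the missing secret, as a sketch; it assumes a kubeconfig at the default location with access to this cluster and is not something the kubelet itself runs:

```go
// Check whether the secret the kubelet cannot find above exists yet.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	s, err := cs.CoreV1().Secrets("openstack-operators").Get(
		context.Background(), "infra-operator-webhook-server-cert", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println("still missing; the kubelet will keep retrying the mount")
	case err != nil:
		panic(err)
	default:
		fmt.Println("present with", len(s.Data), "keys; the cert mount can now succeed")
	}
}
```

The parked operations are retried on the schedule the nestedpendingoperations entries announce, so the mounts resolve on their own once whatever manages the webhook certificates writes the secrets.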
Feb 02 10:54:34 crc kubenswrapper[4782]: E0202 10:54:34.224049 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert podName:009bc68d-5c70-42ca-9008-152206fd954d nodeName:}" failed. No retries permitted until 2026-02-02 10:54:34.724036339 +0000 UTC m=+954.608229055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert") pod "infra-operator-controller-manager-79955696d6-nsx4j" (UID: "009bc68d-5c70-42ca-9008-152206fd954d") : secret "infra-operator-webhook-server-cert" not found
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.271489 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm6b4\" (UniqueName: \"kubernetes.io/projected/224f30b2-1084-4934-8d06-67975a9776ad-kube-api-access-fm6b4\") pod \"horizon-operator-controller-manager-5fb775575f-7z5k7\" (UID: \"224f30b2-1084-4934-8d06-67975a9776ad\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7"
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.273808 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv"]
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.313410 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2j9r\" (UniqueName: \"kubernetes.io/projected/009bc68d-5c70-42ca-9008-152206fd954d-kube-api-access-n2j9r\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j"
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.316477 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6dsh\" (UniqueName: \"kubernetes.io/projected/6b276ac2-533f-43c9-94a1-f0d0e4eb6993-kube-api-access-m6dsh\") pod \"keystone-operator-controller-manager-84f48565d4-w7gld\" (UID: \"6b276ac2-533f-43c9-94a1-f0d0e4eb6993\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld"
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.320282 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv"
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.326018 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlm4l\" (UniqueName: \"kubernetes.io/projected/f44c1b55-d189-42dd-9187-90d9e0713790-kube-api-access-dlm4l\") pod \"manila-operator-controller-manager-7dd968899f-scr7v\" (UID: \"f44c1b55-d189-42dd-9187-90d9e0713790\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v"
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.399317 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4j9k\" (UniqueName: \"kubernetes.io/projected/3624e93f-9208-4f82-9f55-12381a637262-kube-api-access-s4j9k\") pod \"mariadb-operator-controller-manager-67bf948998-n88d6\" (UID: \"3624e93f-9208-4f82-9f55-12381a637262\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6"
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.436710 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78"]
Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.437496 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.447803 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r8g5n" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.447997 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.448746 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.459056 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cksq6" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.459279 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.461430 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlm4l\" (UniqueName: \"kubernetes.io/projected/f44c1b55-d189-42dd-9187-90d9e0713790-kube-api-access-dlm4l\") pod \"manila-operator-controller-manager-7dd968899f-scr7v\" (UID: \"f44c1b55-d189-42dd-9187-90d9e0713790\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.469739 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.490570 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlm4l\" (UniqueName: \"kubernetes.io/projected/f44c1b55-d189-42dd-9187-90d9e0713790-kube-api-access-dlm4l\") pod \"manila-operator-controller-manager-7dd968899f-scr7v\" (UID: \"f44c1b55-d189-42dd-9187-90d9e0713790\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.516384 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.519417 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.536743 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fnsz7" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.538571 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.549803 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.553689 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.554909 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.560345 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.563007 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6pbb\" (UniqueName: \"kubernetes.io/projected/216a79cc-1b33-43f7-81ff-400a3b6f3d00-kube-api-access-p6pbb\") pod \"neutron-operator-controller-manager-585dbc889-l9q78\" (UID: \"216a79cc-1b33-43f7-81ff-400a3b6f3d00\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.563501 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg46x\" (UniqueName: \"kubernetes.io/projected/ab3a96ec-3e51-4147-9a58-6596f2c3ad5c-kube-api-access-wg46x\") pod \"nova-operator-controller-manager-55bff696bd-v8zfh\" (UID: \"ab3a96ec-3e51-4147-9a58-6596f2c3ad5c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.577116 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.588955 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-5szrt" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.597115 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.598463 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.623412 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.643893 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.644784 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.649298 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-zj447" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.664377 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89ctp\" (UniqueName: \"kubernetes.io/projected/6c7ac81b-49d3-493d-a794-1cffe78eba5e-kube-api-access-89ctp\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.664450 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg46x\" (UniqueName: \"kubernetes.io/projected/ab3a96ec-3e51-4147-9a58-6596f2c3ad5c-kube-api-access-wg46x\") pod \"nova-operator-controller-manager-55bff696bd-v8zfh\" (UID: \"ab3a96ec-3e51-4147-9a58-6596f2c3ad5c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.664510 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6pbb\" (UniqueName: \"kubernetes.io/projected/216a79cc-1b33-43f7-81ff-400a3b6f3d00-kube-api-access-p6pbb\") pod \"neutron-operator-controller-manager-585dbc889-l9q78\" (UID: \"216a79cc-1b33-43f7-81ff-400a3b6f3d00\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.664528 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xr5s\" (UniqueName: \"kubernetes.io/projected/7e19a281-abaa-462e-abc7-add4acff7865-kube-api-access-6xr5s\") pod \"octavia-operator-controller-manager-6687f8d877-r9dkb\" (UID: \"7e19a281-abaa-462e-abc7-add4acff7865\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.664548 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.703083 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6pbb\" (UniqueName: \"kubernetes.io/projected/216a79cc-1b33-43f7-81ff-400a3b6f3d00-kube-api-access-p6pbb\") pod \"neutron-operator-controller-manager-585dbc889-l9q78\" (UID: \"216a79cc-1b33-43f7-81ff-400a3b6f3d00\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.723564 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg46x\" (UniqueName: \"kubernetes.io/projected/ab3a96ec-3e51-4147-9a58-6596f2c3ad5c-kube-api-access-wg46x\") pod \"nova-operator-controller-manager-55bff696bd-v8zfh\" (UID: \"ab3a96ec-3e51-4147-9a58-6596f2c3ad5c\") " 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.734901 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.753861 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.754621 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.763110 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.764101 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.772285 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-r9w5d" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.772821 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9b45h" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.773751 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xr5s\" (UniqueName: \"kubernetes.io/projected/7e19a281-abaa-462e-abc7-add4acff7865-kube-api-access-6xr5s\") pod \"octavia-operator-controller-manager-6687f8d877-r9dkb\" (UID: \"7e19a281-abaa-462e-abc7-add4acff7865\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.773884 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.774021 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89ctp\" (UniqueName: \"kubernetes.io/projected/6c7ac81b-49d3-493d-a794-1cffe78eba5e-kube-api-access-89ctp\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.774193 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcvgr\" (UniqueName: \"kubernetes.io/projected/2f8b3b48-0c03-4922-8966-a3aaca8ebce3-kube-api-access-fcvgr\") pod \"ovn-operator-controller-manager-788c46999f-9ls2x\" (UID: \"2f8b3b48-0c03-4922-8966-a3aaca8ebce3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.774337 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:34 crc kubenswrapper[4782]: E0202 10:54:34.774567 4782 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:34 crc kubenswrapper[4782]: E0202 10:54:34.775570 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:34 crc kubenswrapper[4782]: E0202 10:54:34.775659 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert podName:6c7ac81b-49d3-493d-a794-1cffe78eba5e nodeName:}" failed. No retries permitted until 2026-02-02 10:54:35.275628381 +0000 UTC m=+955.159821097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" (UID: "6c7ac81b-49d3-493d-a794-1cffe78eba5e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:34 crc kubenswrapper[4782]: E0202 10:54:34.777975 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert podName:009bc68d-5c70-42ca-9008-152206fd954d nodeName:}" failed. No retries permitted until 2026-02-02 10:54:35.777936497 +0000 UTC m=+955.662129283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert") pod "infra-operator-controller-manager-79955696d6-nsx4j" (UID: "009bc68d-5c70-42ca-9008-152206fd954d") : secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.780226 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.790701 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.800072 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.806003 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.807634 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.816551 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89ctp\" (UniqueName: \"kubernetes.io/projected/6c7ac81b-49d3-493d-a794-1cffe78eba5e-kube-api-access-89ctp\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.823181 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-z8frt" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.827138 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.864395 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.865520 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-k7t28"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.869025 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.871762 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.877434 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4l7w\" (UniqueName: \"kubernetes.io/projected/1661d177-41b5-4df5-886f-f3cb7abd1047-kube-api-access-c4l7w\") pod \"swift-operator-controller-manager-68fc8c869-xnzl4\" (UID: \"1661d177-41b5-4df5-886f-f3cb7abd1047\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.877694 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh8bj\" (UniqueName: \"kubernetes.io/projected/6ac6c6b4-9123-4c39-b26f-b07880c1a6c6-kube-api-access-zh8bj\") pod \"placement-operator-controller-manager-5b964cf4cd-dmncd\" (UID: \"6ac6c6b4-9123-4c39-b26f-b07880c1a6c6\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.877899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcvgr\" (UniqueName: \"kubernetes.io/projected/2f8b3b48-0c03-4922-8966-a3aaca8ebce3-kube-api-access-fcvgr\") pod \"ovn-operator-controller-manager-788c46999f-9ls2x\" (UID: \"2f8b3b48-0c03-4922-8966-a3aaca8ebce3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.878424 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xr5s\" (UniqueName: \"kubernetes.io/projected/7e19a281-abaa-462e-abc7-add4acff7865-kube-api-access-6xr5s\") pod 
\"octavia-operator-controller-manager-6687f8d877-r9dkb\" (UID: \"7e19a281-abaa-462e-abc7-add4acff7865\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.885052 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nsztx" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.885327 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-5zkfp" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.893721 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.905292 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.919177 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcvgr\" (UniqueName: \"kubernetes.io/projected/2f8b3b48-0c03-4922-8966-a3aaca8ebce3-kube-api-access-fcvgr\") pod \"ovn-operator-controller-manager-788c46999f-9ls2x\" (UID: \"2f8b3b48-0c03-4922-8966-a3aaca8ebce3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.923058 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.987873 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhvk7\" (UniqueName: \"kubernetes.io/projected/0fd2f609-78f1-4f82-b405-35b5312baf0d-kube-api-access-bhvk7\") pod \"test-operator-controller-manager-56f8bfcd9f-82nk8\" (UID: \"0fd2f609-78f1-4f82-b405-35b5312baf0d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.987929 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96hgx\" (UniqueName: \"kubernetes.io/projected/127c9a45-7187-4afb-bb45-c34a45e67e4e-kube-api-access-96hgx\") pod \"watcher-operator-controller-manager-564965969-k7t28\" (UID: \"127c9a45-7187-4afb-bb45-c34a45e67e4e\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.987978 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4l7w\" (UniqueName: \"kubernetes.io/projected/1661d177-41b5-4df5-886f-f3cb7abd1047-kube-api-access-c4l7w\") pod \"swift-operator-controller-manager-68fc8c869-xnzl4\" (UID: \"1661d177-41b5-4df5-886f-f3cb7abd1047\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.992792 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh8bj\" (UniqueName: \"kubernetes.io/projected/6ac6c6b4-9123-4c39-b26f-b07880c1a6c6-kube-api-access-zh8bj\") pod \"placement-operator-controller-manager-5b964cf4cd-dmncd\" (UID: \"6ac6c6b4-9123-4c39-b26f-b07880c1a6c6\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:54:34 crc 
kubenswrapper[4782]: I0202 10:54:34.992934 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c24q\" (UniqueName: \"kubernetes.io/projected/c617a97c-fec4-418c-818a-250919ea6882-kube-api-access-6c24q\") pod \"telemetry-operator-controller-manager-64b5b76f97-ckl5m\" (UID: \"c617a97c-fec4-418c-818a-250919ea6882\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.045058 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-k7t28"] Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.050135 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4l7w\" (UniqueName: \"kubernetes.io/projected/1661d177-41b5-4df5-886f-f3cb7abd1047-kube-api-access-c4l7w\") pod \"swift-operator-controller-manager-68fc8c869-xnzl4\" (UID: \"1661d177-41b5-4df5-886f-f3cb7abd1047\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.085105 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh8bj\" (UniqueName: \"kubernetes.io/projected/6ac6c6b4-9123-4c39-b26f-b07880c1a6c6-kube-api-access-zh8bj\") pod \"placement-operator-controller-manager-5b964cf4cd-dmncd\" (UID: \"6ac6c6b4-9123-4c39-b26f-b07880c1a6c6\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.098436 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c24q\" (UniqueName: \"kubernetes.io/projected/c617a97c-fec4-418c-818a-250919ea6882-kube-api-access-6c24q\") pod \"telemetry-operator-controller-manager-64b5b76f97-ckl5m\" (UID: \"c617a97c-fec4-418c-818a-250919ea6882\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.098507 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhvk7\" (UniqueName: \"kubernetes.io/projected/0fd2f609-78f1-4f82-b405-35b5312baf0d-kube-api-access-bhvk7\") pod \"test-operator-controller-manager-56f8bfcd9f-82nk8\" (UID: \"0fd2f609-78f1-4f82-b405-35b5312baf0d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.098529 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96hgx\" (UniqueName: \"kubernetes.io/projected/127c9a45-7187-4afb-bb45-c34a45e67e4e-kube-api-access-96hgx\") pod \"watcher-operator-controller-manager-564965969-k7t28\" (UID: \"127c9a45-7187-4afb-bb45-c34a45e67e4e\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.133163 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.140333 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96hgx\" (UniqueName: \"kubernetes.io/projected/127c9a45-7187-4afb-bb45-c34a45e67e4e-kube-api-access-96hgx\") pod \"watcher-operator-controller-manager-564965969-k7t28\" (UID: \"127c9a45-7187-4afb-bb45-c34a45e67e4e\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.141245 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.146285 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c24q\" (UniqueName: \"kubernetes.io/projected/c617a97c-fec4-418c-818a-250919ea6882-kube-api-access-6c24q\") pod \"telemetry-operator-controller-manager-64b5b76f97-ckl5m\" (UID: \"c617a97c-fec4-418c-818a-250919ea6882\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.154095 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.172928 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.210684 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhvk7\" (UniqueName: \"kubernetes.io/projected/0fd2f609-78f1-4f82-b405-35b5312baf0d-kube-api-access-bhvk7\") pod \"test-operator-controller-manager-56f8bfcd9f-82nk8\" (UID: \"0fd2f609-78f1-4f82-b405-35b5312baf0d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.214308 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.241465 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.344494 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.344664 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.344717 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert podName:6c7ac81b-49d3-493d-a794-1cffe78eba5e nodeName:}" failed. 
No retries permitted until 2026-02-02 10:54:36.344693244 +0000 UTC m=+956.228885960 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" (UID: "6c7ac81b-49d3-493d-a794-1cffe78eba5e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.375389 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp"] Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.381269 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.389059 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.389274 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.389587 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mbtgq" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.416238 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp"] Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.427193 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq"] Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.428819 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.437185 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq"] Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.439385 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7fj8h" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.457089 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn"] Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.548140 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.548497 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.548655 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nx7r\" (UniqueName: \"kubernetes.io/projected/5844bcff-6d6e-4cf4-89af-dfecfc748869-kube-api-access-9nx7r\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.548690 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccmjk\" (UniqueName: \"kubernetes.io/projected/83a0d24e-3e0c-4d9a-b735-77c74ceec664-kube-api-access-ccmjk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jjztq\" (UID: \"83a0d24e-3e0c-4d9a-b735-77c74ceec664\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.649909 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.649993 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nx7r\" (UniqueName: \"kubernetes.io/projected/5844bcff-6d6e-4cf4-89af-dfecfc748869-kube-api-access-9nx7r\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: 
I0202 10:54:35.650031 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccmjk\" (UniqueName: \"kubernetes.io/projected/83a0d24e-3e0c-4d9a-b735-77c74ceec664-kube-api-access-ccmjk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jjztq\" (UID: \"83a0d24e-3e0c-4d9a-b735-77c74ceec664\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.650112 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.650134 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.650221 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:36.150198415 +0000 UTC m=+956.034391161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "webhook-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.650249 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.650302 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:36.150284658 +0000 UTC m=+956.034477454 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "metrics-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.675267 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nx7r\" (UniqueName: \"kubernetes.io/projected/5844bcff-6d6e-4cf4-89af-dfecfc748869-kube-api-access-9nx7r\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.676193 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccmjk\" (UniqueName: \"kubernetes.io/projected/83a0d24e-3e0c-4d9a-b735-77c74ceec664-kube-api-access-ccmjk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jjztq\" (UID: \"83a0d24e-3e0c-4d9a-b735-77c74ceec664\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.688754 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv"] Feb 02 10:54:35 crc kubenswrapper[4782]: W0202 10:54:35.718242 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a74bdcf_4aaf_4fd7_b24d_7cb1d47d1f27.slice/crio-9175a4c17dbbd4d7e70979978b3ad2c4bba6344db32e1a7c43dfc94182b5a7f8 WatchSource:0}: Error finding container 9175a4c17dbbd4d7e70979978b3ad2c4bba6344db32e1a7c43dfc94182b5a7f8: Status 404 returned error can't find the container with id 9175a4c17dbbd4d7e70979978b3ad2c4bba6344db32e1a7c43dfc94182b5a7f8 Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.772000 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.779080 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl"] Feb 02 10:54:35 crc kubenswrapper[4782]: W0202 10:54:35.795135 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb03fe987_deab_47e7_829a_b822ab061f20.slice/crio-3ce5d1b59e2a3b6599174b2a49a4d08835b7c45a25e9b91de63435097f06f65b WatchSource:0}: Error finding container 3ce5d1b59e2a3b6599174b2a49a4d08835b7c45a25e9b91de63435097f06f65b: Status 404 returned error can't find the container with id 3ce5d1b59e2a3b6599174b2a49a4d08835b7c45a25e9b91de63435097f06f65b Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.851820 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.852024 4782 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.852069 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert podName:009bc68d-5c70-42ca-9008-152206fd954d nodeName:}" failed. No retries permitted until 2026-02-02 10:54:37.852055861 +0000 UTC m=+957.736248577 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert") pod "infra-operator-controller-manager-79955696d6-nsx4j" (UID: "009bc68d-5c70-42ca-9008-152206fd954d") : secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.040227 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.051401 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.160197 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.160420 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.160361 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.160580 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:37.160562438 +0000 UTC m=+957.044755154 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "metrics-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.160523 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.161112 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:37.161085053 +0000 UTC m=+957.045277769 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "webhook-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.263231 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" event={"ID":"bfafd643-4798-4519-934d-8ec3e2e677d9","Type":"ContainerStarted","Data":"cf6bb7ee3b8ed2e620ea9ba0767292c04d917653e5004d318c2f9dc5ee752f5c"} Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.269214 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" event={"ID":"b03fe987-deab-47e7-829a-b822ab061f20","Type":"ContainerStarted","Data":"3ce5d1b59e2a3b6599174b2a49a4d08835b7c45a25e9b91de63435097f06f65b"} Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.271434 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" event={"ID":"0aa487d3-a703-4ed6-a44c-bc40eb8272ce","Type":"ContainerStarted","Data":"34da07131024ed19f1828056678064437119bf41e29b997621e545c3ae57965f"} Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.272561 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" event={"ID":"6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27","Type":"ContainerStarted","Data":"9175a4c17dbbd4d7e70979978b3ad2c4bba6344db32e1a7c43dfc94182b5a7f8"} Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.276445 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" event={"ID":"7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7","Type":"ContainerStarted","Data":"9bbedd0724574344844979ca0727761925d888b96504245184405f06deb399b1"} Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.365944 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.366182 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.366237 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert podName:6c7ac81b-49d3-493d-a794-1cffe78eba5e nodeName:}" failed. No retries permitted until 2026-02-02 10:54:38.366218782 +0000 UTC m=+958.250411498 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" (UID: "6c7ac81b-49d3-493d-a794-1cffe78eba5e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.405597 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.420565 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.424343 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.444924 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6"] Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.448734 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f8b3b48_0c03_4922_8966_a3aaca8ebce3.slice/crio-1f7e908fef4cda953228525de1e8c66943d612ca7926988818315db5194a5720 WatchSource:0}: Error finding container 1f7e908fef4cda953228525de1e8c66943d612ca7926988818315db5194a5720: Status 404 returned error can't find the container with id 1f7e908fef4cda953228525de1e8c66943d612ca7926988818315db5194a5720 Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.465367 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.471677 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.480736 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7"] Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.514330 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod224f30b2_1084_4934_8d06_67975a9776ad.slice/crio-d49860db0a252b08815f475658b06d6075373dc3676a21a1f281a03dd6455070 WatchSource:0}: Error finding container d49860db0a252b08815f475658b06d6075373dc3676a21a1f281a03dd6455070: Status 404 returned error can't find the container with id d49860db0a252b08815f475658b06d6075373dc3676a21a1f281a03dd6455070 Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.515873 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3624e93f_9208_4f82_9f55_12381a637262.slice/crio-ceaf04d9ecda10f1b2f84982afb5c97478378ae2d121d6dbb686d5306fcd7e45 WatchSource:0}: Error finding container ceaf04d9ecda10f1b2f84982afb5c97478378ae2d121d6dbb686d5306fcd7e45: Status 404 returned error can't find the container with id ceaf04d9ecda10f1b2f84982afb5c97478378ae2d121d6dbb686d5306fcd7e45 Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.518748 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v"] Feb 02 10:54:36 crc kubenswrapper[4782]: 
W0202 10:54:36.526218 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf44c1b55_d189_42dd_9187_90d9e0713790.slice/crio-72453069a538ff3d4ee103e73d35c5b5e229930d8ca261e33dbad0825cd0bbdd WatchSource:0}: Error finding container 72453069a538ff3d4ee103e73d35c5b5e229930d8ca261e33dbad0825cd0bbdd: Status 404 returned error can't find the container with id 72453069a538ff3d4ee103e73d35c5b5e229930d8ca261e33dbad0825cd0bbdd Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.531231 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ba082c6_4f91_48d6_b5ec_198f46abc135.slice/crio-d72e14f5f1b0c6db5b52155ca3ce938a154f057143662da5243cb46ac2c357bf WatchSource:0}: Error finding container d72e14f5f1b0c6db5b52155ca3ce938a154f057143662da5243cb46ac2c357bf: Status 404 returned error can't find the container with id d72e14f5f1b0c6db5b52155ca3ce938a154f057143662da5243cb46ac2c357bf Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.533356 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab3a96ec_3e51_4147_9a58_6596f2c3ad5c.slice/crio-e690268bcbbb0d4726f69b983c8e32177793cf5f57c6ea12b3626b9353f533a9 WatchSource:0}: Error finding container e690268bcbbb0d4726f69b983c8e32177793cf5f57c6ea12b3626b9353f533a9: Status 404 returned error can't find the container with id e690268bcbbb0d4726f69b983c8e32177793cf5f57c6ea12b3626b9353f533a9 Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.775408 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m"] Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.799676 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc617a97c_fec4_418c_818a_250919ea6882.slice/crio-2e4e94145c393a8d054ec897f4a8def10443e698c703f6d23497348749c5d036 WatchSource:0}: Error finding container 2e4e94145c393a8d054ec897f4a8def10443e698c703f6d23497348749c5d036: Status 404 returned error can't find the container with id 2e4e94145c393a8d054ec897f4a8def10443e698c703f6d23497348749c5d036 Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.801830 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4"] Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.809439 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod216a79cc_1b33_43f7_81ff_400a3b6f3d00.slice/crio-92e5b321a3523e8cd0cf49988d3019b96543c8b371500b937b2f3c3c45234634 WatchSource:0}: Error finding container 92e5b321a3523e8cd0cf49988d3019b96543c8b371500b937b2f3c3c45234634: Status 404 returned error can't find the container with id 92e5b321a3523e8cd0cf49988d3019b96543c8b371500b937b2f3c3c45234634 Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.812254 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-k7t28"] Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.815570 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1661d177_41b5_4df5_886f_f3cb7abd1047.slice/crio-931605dc0e7a67f33e8afecfc843689cd86f5f1ef384db4b2ffdddfe0a90b1d5 
WatchSource:0}: Error finding container 931605dc0e7a67f33e8afecfc843689cd86f5f1ef384db4b2ffdddfe0a90b1d5: Status 404 returned error can't find the container with id 931605dc0e7a67f33e8afecfc843689cd86f5f1ef384db4b2ffdddfe0a90b1d5 Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.822804 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod127c9a45_7187_4afb_bb45_c34a45e67e4e.slice/crio-0e0ea35ba17901399d83ac3e18c6d6813fe58d9ff66e4f151e7c941555929c97 WatchSource:0}: Error finding container 0e0ea35ba17901399d83ac3e18c6d6813fe58d9ff66e4f151e7c941555929c97: Status 404 returned error can't find the container with id 0e0ea35ba17901399d83ac3e18c6d6813fe58d9ff66e4f151e7c941555929c97 Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.849381 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83a0d24e_3e0c_4d9a_b735_77c74ceec664.slice/crio-7c629075a3bf3234909da16bbe216484184ddb4350f6654129e6dbbcc9c84faa WatchSource:0}: Error finding container 7c629075a3bf3234909da16bbe216484184ddb4350f6654129e6dbbcc9c84faa: Status 404 returned error can't find the container with id 7c629075a3bf3234909da16bbe216484184ddb4350f6654129e6dbbcc9c84faa Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.855422 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.855460 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.855472 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd"] Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.855536 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96hgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-k7t28_openstack-operators(127c9a45-7187-4afb-bb45-c34a45e67e4e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.856764 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" podUID="127c9a45-7187-4afb-bb45-c34a45e67e4e" Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.859416 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq"] Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.860370 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zh8bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-dmncd_openstack-operators(6ac6c6b4-9123-4c39-b26f-b07880c1a6c6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.860460 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fd2f609_78f1_4f82_b405_35b5312baf0d.slice/crio-d562fc95b2597f379b066735180bdfd01511e335bb96bfaff8ac1aee04a5746a WatchSource:0}: Error finding container d562fc95b2597f379b066735180bdfd01511e335bb96bfaff8ac1aee04a5746a: Status 404 returned error can't find the container with id d562fc95b2597f379b066735180bdfd01511e335bb96bfaff8ac1aee04a5746a Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.861566 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" podUID="6ac6c6b4-9123-4c39-b26f-b07880c1a6c6" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.869161 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ccmjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-jjztq_openstack-operators(83a0d24e-3e0c-4d9a-b735-77c74ceec664): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.870899 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" podUID="83a0d24e-3e0c-4d9a-b735-77c74ceec664" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.871303 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bhvk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-82nk8_openstack-operators(0fd2f609-78f1-4f82-b405-35b5312baf0d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.872448 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" podUID="0fd2f609-78f1-4f82-b405-35b5312baf0d" Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.182330 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.182398 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.182560 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.182564 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.182622 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:39.182604242 +0000 UTC m=+959.066796958 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "metrics-server-cert" not found Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.182658 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. 
No retries permitted until 2026-02-02 10:54:39.182632022 +0000 UTC m=+959.066824738 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "webhook-server-cert" not found Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.304816 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" event={"ID":"2f8b3b48-0c03-4922-8966-a3aaca8ebce3","Type":"ContainerStarted","Data":"1f7e908fef4cda953228525de1e8c66943d612ca7926988818315db5194a5720"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.308265 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" event={"ID":"9ba082c6-4f91-48d6-b5ec-198f46abc135","Type":"ContainerStarted","Data":"d72e14f5f1b0c6db5b52155ca3ce938a154f057143662da5243cb46ac2c357bf"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.312531 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" event={"ID":"127c9a45-7187-4afb-bb45-c34a45e67e4e","Type":"ContainerStarted","Data":"0e0ea35ba17901399d83ac3e18c6d6813fe58d9ff66e4f151e7c941555929c97"} Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.314517 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" podUID="127c9a45-7187-4afb-bb45-c34a45e67e4e" Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.332174 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" event={"ID":"0fd2f609-78f1-4f82-b405-35b5312baf0d","Type":"ContainerStarted","Data":"d562fc95b2597f379b066735180bdfd01511e335bb96bfaff8ac1aee04a5746a"} Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.334804 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" podUID="0fd2f609-78f1-4f82-b405-35b5312baf0d" Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.355439 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" event={"ID":"f44c1b55-d189-42dd-9187-90d9e0713790","Type":"ContainerStarted","Data":"72453069a538ff3d4ee103e73d35c5b5e229930d8ca261e33dbad0825cd0bbdd"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.368688 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" event={"ID":"216a79cc-1b33-43f7-81ff-400a3b6f3d00","Type":"ContainerStarted","Data":"92e5b321a3523e8cd0cf49988d3019b96543c8b371500b937b2f3c3c45234634"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.370832 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" event={"ID":"7e19a281-abaa-462e-abc7-add4acff7865","Type":"ContainerStarted","Data":"6fda1fdfe0e000cf9c1116a1a708c51087f85bbe69ff03b13e4547aadbccf774"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.377223 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" event={"ID":"1661d177-41b5-4df5-886f-f3cb7abd1047","Type":"ContainerStarted","Data":"931605dc0e7a67f33e8afecfc843689cd86f5f1ef384db4b2ffdddfe0a90b1d5"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.378839 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" event={"ID":"3624e93f-9208-4f82-9f55-12381a637262","Type":"ContainerStarted","Data":"ceaf04d9ecda10f1b2f84982afb5c97478378ae2d121d6dbb686d5306fcd7e45"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.380346 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" event={"ID":"ab3a96ec-3e51-4147-9a58-6596f2c3ad5c","Type":"ContainerStarted","Data":"e690268bcbbb0d4726f69b983c8e32177793cf5f57c6ea12b3626b9353f533a9"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.382823 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" event={"ID":"224f30b2-1084-4934-8d06-67975a9776ad","Type":"ContainerStarted","Data":"d49860db0a252b08815f475658b06d6075373dc3676a21a1f281a03dd6455070"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.388840 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" event={"ID":"6ac6c6b4-9123-4c39-b26f-b07880c1a6c6","Type":"ContainerStarted","Data":"4f74dc5ccf6504e63e573fdadf4da0d9f398ef65b2e2bf5b9ca76ff28893b469"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.392595 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" event={"ID":"c617a97c-fec4-418c-818a-250919ea6882","Type":"ContainerStarted","Data":"2e4e94145c393a8d054ec897f4a8def10443e698c703f6d23497348749c5d036"} Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.393509 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" podUID="6ac6c6b4-9123-4c39-b26f-b07880c1a6c6" Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.396333 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" event={"ID":"83a0d24e-3e0c-4d9a-b735-77c74ceec664","Type":"ContainerStarted","Data":"7c629075a3bf3234909da16bbe216484184ddb4350f6654129e6dbbcc9c84faa"} Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.400878 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" podUID="83a0d24e-3e0c-4d9a-b735-77c74ceec664" Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.408785 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" event={"ID":"6b276ac2-533f-43c9-94a1-f0d0e4eb6993","Type":"ContainerStarted","Data":"03350220f8352457df8eaf923250d9eaa5c2f783704256a1ce36ba6564c7d2ac"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.940304 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.940437 4782 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.940580 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert podName:009bc68d-5c70-42ca-9008-152206fd954d nodeName:}" failed. No retries permitted until 2026-02-02 10:54:41.940562508 +0000 UTC m=+961.824755224 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert") pod "infra-operator-controller-manager-79955696d6-nsx4j" (UID: "009bc68d-5c70-42ca-9008-152206fd954d") : secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:38 crc kubenswrapper[4782]: I0202 10:54:38.448000 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:38 crc kubenswrapper[4782]: E0202 10:54:38.448332 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:38 crc kubenswrapper[4782]: E0202 10:54:38.448405 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert podName:6c7ac81b-49d3-493d-a794-1cffe78eba5e nodeName:}" failed. No retries permitted until 2026-02-02 10:54:42.448388608 +0000 UTC m=+962.332581324 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" (UID: "6c7ac81b-49d3-493d-a794-1cffe78eba5e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:38 crc kubenswrapper[4782]: E0202 10:54:38.453177 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" podUID="0fd2f609-78f1-4f82-b405-35b5312baf0d" Feb 02 10:54:38 crc kubenswrapper[4782]: E0202 10:54:38.453275 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" podUID="6ac6c6b4-9123-4c39-b26f-b07880c1a6c6" Feb 02 10:54:38 crc kubenswrapper[4782]: E0202 10:54:38.453374 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" podUID="83a0d24e-3e0c-4d9a-b735-77c74ceec664" Feb 02 10:54:38 crc kubenswrapper[4782]: E0202 10:54:38.453508 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" podUID="127c9a45-7187-4afb-bb45-c34a45e67e4e" Feb 02 10:54:39 crc kubenswrapper[4782]: I0202 10:54:39.265358 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:39 crc kubenswrapper[4782]: I0202 10:54:39.265713 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:39 crc kubenswrapper[4782]: E0202 10:54:39.265830 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:54:39 crc kubenswrapper[4782]: E0202 10:54:39.265914 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:54:39 crc kubenswrapper[4782]: E0202 10:54:39.265920 4782 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:43.265895859 +0000 UTC m=+963.150088615 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "metrics-server-cert" not found Feb 02 10:54:39 crc kubenswrapper[4782]: E0202 10:54:39.266015 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:43.265994712 +0000 UTC m=+963.150187478 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "webhook-server-cert" not found Feb 02 10:54:42 crc kubenswrapper[4782]: I0202 10:54:42.008408 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:42 crc kubenswrapper[4782]: E0202 10:54:42.008599 4782 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:42 crc kubenswrapper[4782]: E0202 10:54:42.009115 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert podName:009bc68d-5c70-42ca-9008-152206fd954d nodeName:}" failed. No retries permitted until 2026-02-02 10:54:50.009098878 +0000 UTC m=+969.893291594 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert") pod "infra-operator-controller-manager-79955696d6-nsx4j" (UID: "009bc68d-5c70-42ca-9008-152206fd954d") : secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:42 crc kubenswrapper[4782]: I0202 10:54:42.515031 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:42 crc kubenswrapper[4782]: E0202 10:54:42.515213 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:42 crc kubenswrapper[4782]: E0202 10:54:42.515276 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert podName:6c7ac81b-49d3-493d-a794-1cffe78eba5e nodeName:}" failed. 
No retries permitted until 2026-02-02 10:54:50.515260031 +0000 UTC m=+970.399452747 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" (UID: "6c7ac81b-49d3-493d-a794-1cffe78eba5e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:43 crc kubenswrapper[4782]: I0202 10:54:43.326536 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:43 crc kubenswrapper[4782]: I0202 10:54:43.326983 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:43 crc kubenswrapper[4782]: E0202 10:54:43.327137 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:54:43 crc kubenswrapper[4782]: E0202 10:54:43.327196 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:51.327179512 +0000 UTC m=+971.211372238 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "webhook-server-cert" not found Feb 02 10:54:43 crc kubenswrapper[4782]: E0202 10:54:43.327541 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:54:43 crc kubenswrapper[4782]: E0202 10:54:43.327566 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:51.327558513 +0000 UTC m=+971.211751219 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "metrics-server-cert" not found Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.831838 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xbp4r"] Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.833934 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.843211 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xbp4r"] Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.895165 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-utilities\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.895314 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm657\" (UniqueName: \"kubernetes.io/projected/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-kube-api-access-gm657\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.895426 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-catalog-content\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.996678 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-catalog-content\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.996749 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-utilities\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.996798 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm657\" (UniqueName: \"kubernetes.io/projected/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-kube-api-access-gm657\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.997217 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-catalog-content\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.997431 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-utilities\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:48 crc kubenswrapper[4782]: I0202 10:54:48.022686 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gm657\" (UniqueName: \"kubernetes.io/projected/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-kube-api-access-gm657\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:48 crc kubenswrapper[4782]: I0202 10:54:48.166228 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.028753 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.034189 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.125965 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2qdwh" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.132846 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.535554 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.545557 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.774841 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fnsz7" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.783574 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:50 crc kubenswrapper[4782]: E0202 10:54:50.938565 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Feb 02 10:54:50 crc kubenswrapper[4782]: E0202 10:54:50.938810 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5sbb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-5vj4j_openstack-operators(9ba082c6-4f91-48d6-b5ec-198f46abc135): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:50 crc kubenswrapper[4782]: E0202 10:54:50.940254 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" podUID="9ba082c6-4f91-48d6-b5ec-198f46abc135" Feb 02 10:54:51 crc kubenswrapper[4782]: I0202 10:54:51.361504 4782 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:51 crc kubenswrapper[4782]: I0202 10:54:51.361579 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.361716 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.361747 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.361791 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:55:07.361771202 +0000 UTC m=+987.245963978 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "metrics-server-cert" not found Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.361824 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:55:07.361804463 +0000 UTC m=+987.245997179 (durationBeforeRetry 16s). 
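Note: durationBeforeRetry doubles across these entries for the same volume (4s at 10:54:37-39, 8s at 10:54:42-43, 16s at 10:54:51), which is kubelet's per-operation exponential backoff in nestedpendingoperations. A tiny sketch of the doubling pattern visible above; the initial delay and cap here are illustrative assumptions, not kubelet's actual constants:

    // Illustrative only: the doubling behind durationBeforeRetry (4s, 8s, 16s).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        d := 500 * time.Millisecond     // assumed initial delay
        limit := 2*time.Minute + 2*time.Second // assumed cap
        for i := 1; i <= 9; i++ {
            fmt.Printf("failure %d: retry in %v\n", i, d)
            d *= 2
            if d > limit {
                d = limit
            }
        }
    }

The practical takeaway is that once the missing secret shows up, the worst case wait before the next mount attempt is one backoff interval, which matches the successful mounts a few seconds later in this log.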
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "webhook-server-cert" not found Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.537547 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" podUID="9ba082c6-4f91-48d6-b5ec-198f46abc135" Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.620503 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.620724 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cq2js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
glance-operator-controller-manager-8886f4c47-v7tzl_openstack-operators(b03fe987-deab-47e7-829a-b822ab061f20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.621984 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" podUID="b03fe987-deab-47e7-829a-b822ab061f20" Feb 02 10:54:52 crc kubenswrapper[4782]: E0202 10:54:52.224513 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Feb 02 10:54:52 crc kubenswrapper[4782]: E0202 10:54:52.224979 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dlm4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-scr7v_openstack-operators(f44c1b55-d189-42dd-9187-90d9e0713790): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:52 crc kubenswrapper[4782]: 
E0202 10:54:52.226417 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" podUID="f44c1b55-d189-42dd-9187-90d9e0713790" Feb 02 10:54:52 crc kubenswrapper[4782]: E0202 10:54:52.546041 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" podUID="b03fe987-deab-47e7-829a-b822ab061f20" Feb 02 10:54:52 crc kubenswrapper[4782]: E0202 10:54:52.550187 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" podUID="f44c1b55-d189-42dd-9187-90d9e0713790" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.284670 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.284855 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6c24q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-ckl5m_openstack-operators(c617a97c-fec4-418c-818a-250919ea6882): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.286161 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" podUID="c617a97c-fec4-418c-818a-250919ea6882" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.550464 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" podUID="c617a97c-fec4-418c-818a-250919ea6882" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.952793 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.954064 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fm6b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-7z5k7_openstack-operators(224f30b2-1084-4934-8d06-67975a9776ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.955592 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" podUID="224f30b2-1084-4934-8d06-67975a9776ad" Feb 02 10:54:54 crc kubenswrapper[4782]: E0202 10:54:54.564116 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" podUID="224f30b2-1084-4934-8d06-67975a9776ad" Feb 02 10:54:54 crc kubenswrapper[4782]: E0202 10:54:54.659549 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Feb 02 10:54:54 crc kubenswrapper[4782]: E0202 10:54:54.659814 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fcvgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-9ls2x_openstack-operators(2f8b3b48-0c03-4922-8966-a3aaca8ebce3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:54 crc kubenswrapper[4782]: E0202 10:54:54.662331 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" podUID="2f8b3b48-0c03-4922-8966-a3aaca8ebce3" Feb 02 10:54:55 crc kubenswrapper[4782]: E0202 10:54:55.575122 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" podUID="2f8b3b48-0c03-4922-8966-a3aaca8ebce3" Feb 02 10:54:56 crc kubenswrapper[4782]: E0202 10:54:56.823850 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Feb 02 10:54:56 crc kubenswrapper[4782]: E0202 10:54:56.824114 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6pbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-l9q78_openstack-operators(216a79cc-1b33-43f7-81ff-400a3b6f3d00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:56 crc kubenswrapper[4782]: E0202 10:54:56.825316 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" podUID="216a79cc-1b33-43f7-81ff-400a3b6f3d00" Feb 02 10:54:57 crc kubenswrapper[4782]: E0202 10:54:57.461468 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Feb 02 10:54:57 crc kubenswrapper[4782]: E0202 10:54:57.461680 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c4l7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-xnzl4_openstack-operators(1661d177-41b5-4df5-886f-f3cb7abd1047): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:57 crc kubenswrapper[4782]: E0202 10:54:57.463490 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" podUID="1661d177-41b5-4df5-886f-f3cb7abd1047" Feb 02 10:54:57 crc kubenswrapper[4782]: E0202 10:54:57.586013 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" podUID="1661d177-41b5-4df5-886f-f3cb7abd1047" Feb 02 10:54:57 crc kubenswrapper[4782]: E0202 10:54:57.586029 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" 
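Note: the pull failures in this stretch all share one root cause ("rpc error: code = Canceled desc = copying config: context canceled"): the CRI runtime canceled in-flight image copies, after which each affected pod cycles through ErrImagePull and then ImagePullBackOff. When triaging a journal like this, counting back-off events per pod separates the systemic pull problem from per-pod issues. A throwaway Go filter for that, reading the saved journal on stdin; hypothetical tooling, not part of anything in this log:

    // Tally "Back-off pulling image" (ImagePullBackOff) events per pod.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`ImagePullBackOff.*?pod="([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for pod, n := range counts {
            fmt.Printf("%4d  %s\n", n, pod)
        }
    }

Run as e.g. "journalctl -u kubelet | go run tally.go"; an even spread across many operator pods, as here, points at the registry/network side rather than any single image.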
podUID="216a79cc-1b33-43f7-81ff-400a3b6f3d00" Feb 02 10:55:03 crc kubenswrapper[4782]: E0202 10:55:03.717587 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Feb 02 10:55:03 crc kubenswrapper[4782]: E0202 10:55:03.718359 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wg46x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-v8zfh_openstack-operators(ab3a96ec-3e51-4147-9a58-6596f2c3ad5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:55:03 crc kubenswrapper[4782]: E0202 10:55:03.719548 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" podUID="ab3a96ec-3e51-4147-9a58-6596f2c3ad5c" Feb 02 10:55:04 crc kubenswrapper[4782]: E0202 10:55:04.099845 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Feb 02 10:55:04 crc kubenswrapper[4782]: E0202 10:55:04.100027 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6dsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-w7gld_openstack-operators(6b276ac2-533f-43c9-94a1-f0d0e4eb6993): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:55:04 crc kubenswrapper[4782]: E0202 10:55:04.101270 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" podUID="6b276ac2-533f-43c9-94a1-f0d0e4eb6993" Feb 02 10:55:04 crc kubenswrapper[4782]: E0202 10:55:04.632814 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" 
podUID="ab3a96ec-3e51-4147-9a58-6596f2c3ad5c" Feb 02 10:55:04 crc kubenswrapper[4782]: E0202 10:55:04.633120 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" podUID="6b276ac2-533f-43c9-94a1-f0d0e4eb6993" Feb 02 10:55:05 crc kubenswrapper[4782]: I0202 10:55:05.271489 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j"] Feb 02 10:55:05 crc kubenswrapper[4782]: E0202 10:55:05.402491 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 02 10:55:05 crc kubenswrapper[4782]: E0202 10:55:05.402718 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ccmjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-jjztq_openstack-operators(83a0d24e-3e0c-4d9a-b735-77c74ceec664): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:55:05 crc kubenswrapper[4782]: E0202 10:55:05.404303 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" podUID="83a0d24e-3e0c-4d9a-b735-77c74ceec664" Feb 02 10:55:05 crc kubenswrapper[4782]: I0202 10:55:05.637203 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" event={"ID":"009bc68d-5c70-42ca-9008-152206fd954d","Type":"ContainerStarted","Data":"da4402e37ed82a7f4a067b442c749e7a17b2e65fd5f3d52f4b496f73e00bc8d9"} Feb 02 10:55:05 crc kubenswrapper[4782]: I0202 10:55:05.907959 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xbp4r"] Feb 02 10:55:05 crc kubenswrapper[4782]: I0202 10:55:05.984015 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf"] Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.653676 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" event={"ID":"7e19a281-abaa-462e-abc7-add4acff7865","Type":"ContainerStarted","Data":"8a7669b9ffd8842827dbd0ea76d4378ea45540c7f78eedb3314951a0c3431141"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.654043 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.672412 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" event={"ID":"7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7","Type":"ContainerStarted","Data":"2e4c00b2b01b9748426a51cf9f83b438e434cb1d716dd23b5d6e2a9bc73bfc74"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.672616 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.683190 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" event={"ID":"6ac6c6b4-9123-4c39-b26f-b07880c1a6c6","Type":"ContainerStarted","Data":"07b6dc4fad32236b864b817b9a59e1d6cc14873f25477de952a1137481d824ce"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.683820 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.688983 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" event={"ID":"9ba082c6-4f91-48d6-b5ec-198f46abc135","Type":"ContainerStarted","Data":"0d5fe81c8bd5c973948077090c1b65bc4df2d529f17695ec421625ed71168cb4"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.689325 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.690364 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" event={"ID":"6c7ac81b-49d3-493d-a794-1cffe78eba5e","Type":"ContainerStarted","Data":"cbaef421110963b4f1260f599c00f469f812c9a532c8d7d7047d492ad6bc8e00"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.702046 4782 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" event={"ID":"0aa487d3-a703-4ed6-a44c-bc40eb8272ce","Type":"ContainerStarted","Data":"a9a5b4f3c50e1a091bb1452666170603f6922e4075de68696799858a65323dee"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.703004 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.712158 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" podStartSLOduration=10.451201834 podStartE2EDuration="32.71213669s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.463160536 +0000 UTC m=+956.347353252" lastFinishedPulling="2026-02-02 10:54:58.724095392 +0000 UTC m=+978.608288108" observedRunningTime="2026-02-02 10:55:06.703521673 +0000 UTC m=+986.587714399" watchObservedRunningTime="2026-02-02 10:55:06.71213669 +0000 UTC m=+986.596329406" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.722517 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" event={"ID":"3624e93f-9208-4f82-9f55-12381a637262","Type":"ContainerStarted","Data":"ec0b389260c6f87eef2659ae958ae5c0c5e264acf7f052fe4108f6b9c26b04a3"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.723331 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.724994 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" event={"ID":"0fd2f609-78f1-4f82-b405-35b5312baf0d","Type":"ContainerStarted","Data":"17f65d53be9c6b4ee41c6622bf4e3ddf8630d502bc376ae0c7e01407e6d57858"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.725533 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.730667 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" event={"ID":"bfafd643-4798-4519-934d-8ec3e2e677d9","Type":"ContainerStarted","Data":"51ab08bfb364b2577d0fd99d76329de4dc232409a7c9b636cdb2ba9b71c025e5"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.731449 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.733375 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" event={"ID":"127c9a45-7187-4afb-bb45-c34a45e67e4e","Type":"ContainerStarted","Data":"4040af90f8b61be86713c087bb897d6a2b26a0e4fc725dc9db0dc5df98d30869"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.733976 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.740063 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" 
event={"ID":"6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27","Type":"ContainerStarted","Data":"838bb51d90701c23340bc34b35aa3491f5dc59f16bb005b171eaa12389abe82a"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.740471 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.755943 4782 generic.go:334] "Generic (PLEG): container finished" podID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerID="88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444" exitCode=0 Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.755989 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerDied","Data":"88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.756013 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerStarted","Data":"b35a4e0e6150963a20819d29e6e20270f93008d6cf7aa812ef8e1c21fd13b16f"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.758927 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" podStartSLOduration=4.133216635 podStartE2EDuration="32.758906138s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.859505947 +0000 UTC m=+956.743698663" lastFinishedPulling="2026-02-02 10:55:05.48519545 +0000 UTC m=+985.369388166" observedRunningTime="2026-02-02 10:55:06.753750221 +0000 UTC m=+986.637942957" watchObservedRunningTime="2026-02-02 10:55:06.758906138 +0000 UTC m=+986.643098854" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.812326 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" podStartSLOduration=4.833076669 podStartE2EDuration="33.812309607s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.53350974 +0000 UTC m=+956.417702456" lastFinishedPulling="2026-02-02 10:55:05.512742678 +0000 UTC m=+985.396935394" observedRunningTime="2026-02-02 10:55:06.807833958 +0000 UTC m=+986.692026674" watchObservedRunningTime="2026-02-02 10:55:06.812309607 +0000 UTC m=+986.696502323" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.866667 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" podStartSLOduration=11.233119495 podStartE2EDuration="33.866622481s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.088251439 +0000 UTC m=+955.972444155" lastFinishedPulling="2026-02-02 10:54:58.721754425 +0000 UTC m=+978.605947141" observedRunningTime="2026-02-02 10:55:06.861092263 +0000 UTC m=+986.745284979" watchObservedRunningTime="2026-02-02 10:55:06.866622481 +0000 UTC m=+986.750815197" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.941059 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" podStartSLOduration=9.762214005 podStartE2EDuration="33.94103777s" podCreationTimestamp="2026-02-02 
10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:35.590369233 +0000 UTC m=+955.474561949" lastFinishedPulling="2026-02-02 10:54:59.769192998 +0000 UTC m=+979.653385714" observedRunningTime="2026-02-02 10:55:06.927840043 +0000 UTC m=+986.812032759" watchObservedRunningTime="2026-02-02 10:55:06.94103777 +0000 UTC m=+986.825230486" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.023448 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" podStartSLOduration=4.399355132 podStartE2EDuration="33.023427208s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.849743108 +0000 UTC m=+956.733935824" lastFinishedPulling="2026-02-02 10:55:05.473815184 +0000 UTC m=+985.358007900" observedRunningTime="2026-02-02 10:55:07.021016339 +0000 UTC m=+986.905209075" watchObservedRunningTime="2026-02-02 10:55:07.023427208 +0000 UTC m=+986.907619924" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.053012 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" podStartSLOduration=12.044230127 podStartE2EDuration="34.052991034s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.061741841 +0000 UTC m=+955.945934567" lastFinishedPulling="2026-02-02 10:54:58.070502758 +0000 UTC m=+977.954695474" observedRunningTime="2026-02-02 10:55:07.046432246 +0000 UTC m=+986.930624962" watchObservedRunningTime="2026-02-02 10:55:07.052991034 +0000 UTC m=+986.937183760" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.151306 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" podStartSLOduration=10.961613231 podStartE2EDuration="33.151288697s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.532228753 +0000 UTC m=+956.416421469" lastFinishedPulling="2026-02-02 10:54:58.721904219 +0000 UTC m=+978.606096935" observedRunningTime="2026-02-02 10:55:07.140843008 +0000 UTC m=+987.025035744" watchObservedRunningTime="2026-02-02 10:55:07.151288697 +0000 UTC m=+987.035481413" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.233903 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" podStartSLOduration=12.488276756 podStartE2EDuration="34.233882901s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:35.732798249 +0000 UTC m=+955.616990965" lastFinishedPulling="2026-02-02 10:54:57.478404394 +0000 UTC m=+977.362597110" observedRunningTime="2026-02-02 10:55:07.210748638 +0000 UTC m=+987.094941354" watchObservedRunningTime="2026-02-02 10:55:07.233882901 +0000 UTC m=+987.118075627" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.250183 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" podStartSLOduration=4.498170339 podStartE2EDuration="33.250164686s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.871000406 +0000 UTC m=+956.755193122" lastFinishedPulling="2026-02-02 10:55:05.622994753 +0000 UTC m=+985.507187469" observedRunningTime="2026-02-02 10:55:07.231340878 +0000 UTC m=+987.115533594" 
watchObservedRunningTime="2026-02-02 10:55:07.250164686 +0000 UTC m=+987.134357402" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.452051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.452117 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.461192 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.461775 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.532262 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mbtgq" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.539530 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.050303 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp"] Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.753961 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6x8zf"] Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.756034 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.772760 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6x8zf"] Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.809889 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" event={"ID":"5844bcff-6d6e-4cf4-89af-dfecfc748869","Type":"ContainerStarted","Data":"a1a9b1299ba024edbd925b8c7a33b3032edca71552d93e831208ed8cf858eac0"} Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.809959 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" event={"ID":"5844bcff-6d6e-4cf4-89af-dfecfc748869","Type":"ContainerStarted","Data":"ea2c828cebf1dd80e8979e92dd2323e06bfe516f2e6a7a50b9b6a02581784071"} Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.854769 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" podStartSLOduration=33.854752474 podStartE2EDuration="33.854752474s" podCreationTimestamp="2026-02-02 10:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:55:08.847872787 +0000 UTC m=+988.732065503" watchObservedRunningTime="2026-02-02 10:55:08.854752474 +0000 UTC m=+988.738945190" Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.900890 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdxhk\" (UniqueName: \"kubernetes.io/projected/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-kube-api-access-xdxhk\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.901206 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-utilities\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.901331 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-catalog-content\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.002994 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdxhk\" (UniqueName: \"kubernetes.io/projected/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-kube-api-access-xdxhk\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.003080 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-utilities\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " 
pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.003105 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-catalog-content\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.003696 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-catalog-content\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.004295 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-utilities\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.039005 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdxhk\" (UniqueName: \"kubernetes.io/projected/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-kube-api-access-xdxhk\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.085117 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.820977 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:10 crc kubenswrapper[4782]: I0202 10:55:10.041403 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6x8zf"] Feb 02 10:55:10 crc kubenswrapper[4782]: I0202 10:55:10.832506 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x8zf" event={"ID":"5527d0d6-41e7-42f6-bcb8-65dccddacbd4","Type":"ContainerStarted","Data":"5e81cbef9e833505fd0df87090b3b8a5e200225b581ca1d4a7afe27bab6d1427"} Feb 02 10:55:12 crc kubenswrapper[4782]: I0202 10:55:12.848216 4782 generic.go:334] "Generic (PLEG): container finished" podID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerID="02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f" exitCode=0 Feb 02 10:55:12 crc kubenswrapper[4782]: I0202 10:55:12.849761 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x8zf" event={"ID":"5527d0d6-41e7-42f6-bcb8-65dccddacbd4","Type":"ContainerDied","Data":"02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.352049 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zphk7"] Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.354037 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.376077 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zphk7"] Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.470609 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-utilities\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.470676 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-catalog-content\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.470717 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7694\" (UniqueName: \"kubernetes.io/projected/4d43af81-0992-412f-8847-e3c97ab9c5ec-kube-api-access-b7694\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.575784 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7694\" (UniqueName: \"kubernetes.io/projected/4d43af81-0992-412f-8847-e3c97ab9c5ec-kube-api-access-b7694\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.575900 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-utilities\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.575919 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-catalog-content\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.576317 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-catalog-content\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.576530 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-utilities\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.609806 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b7694\" (UniqueName: \"kubernetes.io/projected/4d43af81-0992-412f-8847-e3c97ab9c5ec-kube-api-access-b7694\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.672113 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.855727 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" event={"ID":"2f8b3b48-0c03-4922-8966-a3aaca8ebce3","Type":"ContainerStarted","Data":"aafeb6a1d514a10efaf35e2bfda7c93b6512861484627867db3b0474793e9aaf"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.857129 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.858149 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" event={"ID":"b03fe987-deab-47e7-829a-b822ab061f20","Type":"ContainerStarted","Data":"d402c0fcdb5d4fa8e4dca5378f965f20361ae89497cd488fc548e2f108a3585a"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.858584 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.860782 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerStarted","Data":"40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.861967 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" event={"ID":"c617a97c-fec4-418c-818a-250919ea6882","Type":"ContainerStarted","Data":"cdfcc0e5dcfdf02da12d2b38d9f57eb986f12e226763a3c9099d4c4a2b414c96"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.862327 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.863322 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" event={"ID":"6c7ac81b-49d3-493d-a794-1cffe78eba5e","Type":"ContainerStarted","Data":"0f95d86f74b98e4dbc2f711bf78734eb3290e39ed3c8dc88ebaa0bcbaf33fad6"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.863921 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.866424 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" event={"ID":"f44c1b55-d189-42dd-9187-90d9e0713790","Type":"ContainerStarted","Data":"246222310d651b966169e79e1962a2400c59d0f7238a0e1776396a2c08a48953"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.867099 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.868398 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" event={"ID":"009bc68d-5c70-42ca-9008-152206fd954d","Type":"ContainerStarted","Data":"a782ded10f4830efde9ad7a0b9c882ac4b232641312411ed441628ab387c01b0"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.868620 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.869779 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" event={"ID":"216a79cc-1b33-43f7-81ff-400a3b6f3d00","Type":"ContainerStarted","Data":"7316b36ec7b686d7c8bd21502a880eee1fb05f6cd26fb1f4483bd777a8ad092e"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.870090 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.870926 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" event={"ID":"224f30b2-1084-4934-8d06-67975a9776ad","Type":"ContainerStarted","Data":"6a7e603b5edb39378311215eadde71b35da21649e0a777a39515836c561a30ba"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.871165 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.872213 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" event={"ID":"1661d177-41b5-4df5-886f-f3cb7abd1047","Type":"ContainerStarted","Data":"6d5e8ad129b2545b99fc7962dad50fbb4d1b509a7b94a776b6be4124cd84eafd"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.872367 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.922517 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" podStartSLOduration=33.34846726 podStartE2EDuration="39.922498444s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:55:06.001254898 +0000 UTC m=+985.885447614" lastFinishedPulling="2026-02-02 10:55:12.575286082 +0000 UTC m=+992.459478798" observedRunningTime="2026-02-02 10:55:13.918058617 +0000 UTC m=+993.802251333" watchObservedRunningTime="2026-02-02 10:55:13.922498444 +0000 UTC m=+993.806691160" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.926477 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" podStartSLOduration=3.961789791 podStartE2EDuration="39.926455867s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.450466643 +0000 UTC m=+956.334659359" lastFinishedPulling="2026-02-02 10:55:12.415132719 +0000 UTC m=+992.299325435" observedRunningTime="2026-02-02 10:55:13.888006797 +0000 UTC m=+993.772199513" 
watchObservedRunningTime="2026-02-02 10:55:13.926455867 +0000 UTC m=+993.810648583" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.939126 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" podStartSLOduration=4.187073986 podStartE2EDuration="39.939105339s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.804447801 +0000 UTC m=+956.688640527" lastFinishedPulling="2026-02-02 10:55:12.556479164 +0000 UTC m=+992.440671880" observedRunningTime="2026-02-02 10:55:13.93773109 +0000 UTC m=+993.821923806" watchObservedRunningTime="2026-02-02 10:55:13.939105339 +0000 UTC m=+993.823298055" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.960023 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" podStartSLOduration=34.051516251 podStartE2EDuration="40.960005207s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:55:05.473877706 +0000 UTC m=+985.358070422" lastFinishedPulling="2026-02-02 10:55:12.382366662 +0000 UTC m=+992.266559378" observedRunningTime="2026-02-02 10:55:13.956129706 +0000 UTC m=+993.840322422" watchObservedRunningTime="2026-02-02 10:55:13.960005207 +0000 UTC m=+993.844197923" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.015882 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" podStartSLOduration=3.9939968329999997 podStartE2EDuration="40.015864346s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.532500021 +0000 UTC m=+956.416692737" lastFinishedPulling="2026-02-02 10:55:12.554367534 +0000 UTC m=+992.438560250" observedRunningTime="2026-02-02 10:55:13.984071146 +0000 UTC m=+993.868263862" watchObservedRunningTime="2026-02-02 10:55:14.015864346 +0000 UTC m=+993.900057062" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.044933 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" podStartSLOduration=4.317380235 podStartE2EDuration="40.044916157s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.829069626 +0000 UTC m=+956.713262342" lastFinishedPulling="2026-02-02 10:55:12.556605548 +0000 UTC m=+992.440798264" observedRunningTime="2026-02-02 10:55:14.043397094 +0000 UTC m=+993.927589810" watchObservedRunningTime="2026-02-02 10:55:14.044916157 +0000 UTC m=+993.929108873" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.062386 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" podStartSLOduration=5.168211729 podStartE2EDuration="41.062367247s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.51988325 +0000 UTC m=+956.404075966" lastFinishedPulling="2026-02-02 10:55:12.414038768 +0000 UTC m=+992.298231484" observedRunningTime="2026-02-02 10:55:14.060019899 +0000 UTC m=+993.944212615" watchObservedRunningTime="2026-02-02 10:55:14.062367247 +0000 UTC m=+993.946559963" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.098495 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" podStartSLOduration=4.502553474 podStartE2EDuration="40.09847626s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.816664581 +0000 UTC m=+956.700857297" lastFinishedPulling="2026-02-02 10:55:12.412587367 +0000 UTC m=+992.296780083" observedRunningTime="2026-02-02 10:55:14.096213055 +0000 UTC m=+993.980405771" watchObservedRunningTime="2026-02-02 10:55:14.09847626 +0000 UTC m=+993.982668976" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.123146 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" podStartSLOduration=4.47980082 podStartE2EDuration="41.123132925s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:35.81812336 +0000 UTC m=+955.702316076" lastFinishedPulling="2026-02-02 10:55:12.461455465 +0000 UTC m=+992.345648181" observedRunningTime="2026-02-02 10:55:14.120258503 +0000 UTC m=+994.004451219" watchObservedRunningTime="2026-02-02 10:55:14.123132925 +0000 UTC m=+994.007325641" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.124838 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.136266 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.149287 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.235736 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.324715 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.602148 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.879044 4782 generic.go:334] "Generic (PLEG): container finished" podID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerID="40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7" exitCode=0 Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.880196 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerDied","Data":"40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7"} Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.907370 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:55:15 crc kubenswrapper[4782]: I0202 10:55:15.137242 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:55:15 crc kubenswrapper[4782]: I0202 10:55:15.221711 4782 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:55:15 crc kubenswrapper[4782]: I0202 10:55:15.245480 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:55:16 crc kubenswrapper[4782]: I0202 10:55:16.147665 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:55:16 crc kubenswrapper[4782]: I0202 10:55:16.766223 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zphk7"] Feb 02 10:55:16 crc kubenswrapper[4782]: I0202 10:55:16.893484 4782 generic.go:334] "Generic (PLEG): container finished" podID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerID="101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6" exitCode=0 Feb 02 10:55:16 crc kubenswrapper[4782]: I0202 10:55:16.893569 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x8zf" event={"ID":"5527d0d6-41e7-42f6-bcb8-65dccddacbd4","Type":"ContainerDied","Data":"101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6"} Feb 02 10:55:16 crc kubenswrapper[4782]: I0202 10:55:16.897256 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerStarted","Data":"291c7cc1034e1388ada25b16a9b90b3e30e39b678bc61c7573b4a92f1ad048e6"} Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.550351 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.904084 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" event={"ID":"6b276ac2-533f-43c9-94a1-f0d0e4eb6993","Type":"ContainerStarted","Data":"30326654e96f0a152cccbe55069fde8e58735bab6ad9e408b52328334f00bcd1"} Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.904706 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.907016 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerStarted","Data":"a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df"} Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.909974 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x8zf" event={"ID":"5527d0d6-41e7-42f6-bcb8-65dccddacbd4","Type":"ContainerStarted","Data":"cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754"} Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.911565 4782 generic.go:334] "Generic (PLEG): container finished" podID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerID="c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71" exitCode=0 Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.911595 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" 
event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerDied","Data":"c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71"} Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.923783 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" podStartSLOduration=4.028499805 podStartE2EDuration="44.923767526s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.509834842 +0000 UTC m=+956.394027558" lastFinishedPulling="2026-02-02 10:55:17.405102563 +0000 UTC m=+997.289295279" observedRunningTime="2026-02-02 10:55:17.921306715 +0000 UTC m=+997.805499431" watchObservedRunningTime="2026-02-02 10:55:17.923767526 +0000 UTC m=+997.807960242" Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.972424 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6x8zf" podStartSLOduration=5.41910157 podStartE2EDuration="9.972402468s" podCreationTimestamp="2026-02-02 10:55:08 +0000 UTC" firstStartedPulling="2026-02-02 10:55:12.85196022 +0000 UTC m=+992.736152936" lastFinishedPulling="2026-02-02 10:55:17.405261118 +0000 UTC m=+997.289453834" observedRunningTime="2026-02-02 10:55:17.964870322 +0000 UTC m=+997.849063058" watchObservedRunningTime="2026-02-02 10:55:17.972402468 +0000 UTC m=+997.856595184" Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.986194 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xbp4r" podStartSLOduration=20.545424826 podStartE2EDuration="30.986175582s" podCreationTimestamp="2026-02-02 10:54:47 +0000 UTC" firstStartedPulling="2026-02-02 10:55:06.761557774 +0000 UTC m=+986.645750490" lastFinishedPulling="2026-02-02 10:55:17.20230853 +0000 UTC m=+997.086501246" observedRunningTime="2026-02-02 10:55:17.984602427 +0000 UTC m=+997.868795153" watchObservedRunningTime="2026-02-02 10:55:17.986175582 +0000 UTC m=+997.870368298" Feb 02 10:55:18 crc kubenswrapper[4782]: I0202 10:55:18.167875 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:55:18 crc kubenswrapper[4782]: I0202 10:55:18.168133 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:55:18 crc kubenswrapper[4782]: E0202 10:55:18.827161 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" podUID="83a0d24e-3e0c-4d9a-b735-77c74ceec664" Feb 02 10:55:18 crc kubenswrapper[4782]: I0202 10:55:18.928775 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" event={"ID":"ab3a96ec-3e51-4147-9a58-6596f2c3ad5c","Type":"ContainerStarted","Data":"da1767a8c52f1fefb3e588727a746e8770186442bc68a394e48ca0c99c63fc1a"} Feb 02 10:55:18 crc kubenswrapper[4782]: I0202 10:55:18.929758 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:55:18 crc kubenswrapper[4782]: I0202 10:55:18.933885 4782 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerStarted","Data":"49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34"} Feb 02 10:55:18 crc kubenswrapper[4782]: I0202 10:55:18.954920 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" podStartSLOduration=2.889574839 podStartE2EDuration="44.954893673s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.548157029 +0000 UTC m=+956.432349755" lastFinishedPulling="2026-02-02 10:55:18.613475873 +0000 UTC m=+998.497668589" observedRunningTime="2026-02-02 10:55:18.947743138 +0000 UTC m=+998.831935864" watchObservedRunningTime="2026-02-02 10:55:18.954893673 +0000 UTC m=+998.839086389" Feb 02 10:55:19 crc kubenswrapper[4782]: I0202 10:55:19.086537 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:19 crc kubenswrapper[4782]: I0202 10:55:19.086586 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:19 crc kubenswrapper[4782]: I0202 10:55:19.213983 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xbp4r" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="registry-server" probeResult="failure" output=< Feb 02 10:55:19 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:55:19 crc kubenswrapper[4782]: > Feb 02 10:55:19 crc kubenswrapper[4782]: I0202 10:55:19.941663 4782 generic.go:334] "Generic (PLEG): container finished" podID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerID="49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34" exitCode=0 Feb 02 10:55:19 crc kubenswrapper[4782]: I0202 10:55:19.941717 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerDied","Data":"49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34"} Feb 02 10:55:20 crc kubenswrapper[4782]: I0202 10:55:20.131908 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6x8zf" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="registry-server" probeResult="failure" output=< Feb 02 10:55:20 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:55:20 crc kubenswrapper[4782]: > Feb 02 10:55:20 crc kubenswrapper[4782]: I0202 10:55:20.140656 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:55:20 crc kubenswrapper[4782]: I0202 10:55:20.790004 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:55:20 crc kubenswrapper[4782]: I0202 10:55:20.950020 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerStarted","Data":"778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524"} Feb 02 10:55:22 crc kubenswrapper[4782]: I0202 10:55:22.951056 4782 patch_prober.go:28] 
interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:55:22 crc kubenswrapper[4782]: I0202 10:55:22.951496 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:55:23 crc kubenswrapper[4782]: I0202 10:55:23.673024 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:23 crc kubenswrapper[4782]: I0202 10:55:23.673077 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.184129 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.209195 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zphk7" podStartSLOduration=8.651528151 podStartE2EDuration="11.20915532s" podCreationTimestamp="2026-02-02 10:55:13 +0000 UTC" firstStartedPulling="2026-02-02 10:55:17.912934336 +0000 UTC m=+997.797127052" lastFinishedPulling="2026-02-02 10:55:20.470561505 +0000 UTC m=+1000.354754221" observedRunningTime="2026-02-02 10:55:20.976439542 +0000 UTC m=+1000.860632258" watchObservedRunningTime="2026-02-02 10:55:24.20915532 +0000 UTC m=+1004.093348036" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.554751 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.564780 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.633583 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.717887 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zphk7" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="registry-server" probeResult="failure" output=< Feb 02 10:55:24 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:55:24 crc kubenswrapper[4782]: > Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.803736 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.832199 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:55:25 crc kubenswrapper[4782]: I0202 10:55:25.145057 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:55:25 crc kubenswrapper[4782]: I0202 10:55:25.157354 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:55:25 crc kubenswrapper[4782]: I0202 10:55:25.186139 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:55:28 crc kubenswrapper[4782]: I0202 10:55:28.210067 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:55:28 crc kubenswrapper[4782]: I0202 10:55:28.267580 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:55:28 crc kubenswrapper[4782]: I0202 10:55:28.450555 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xbp4r"] Feb 02 10:55:29 crc kubenswrapper[4782]: I0202 10:55:29.128268 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:29 crc kubenswrapper[4782]: I0202 10:55:29.189348 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.005179 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xbp4r" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="registry-server" containerID="cri-o://a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df" gracePeriod=2 Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.412845 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.615762 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-utilities\") pod \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.615933 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-catalog-content\") pod \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.616034 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm657\" (UniqueName: \"kubernetes.io/projected/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-kube-api-access-gm657\") pod \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.616607 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-utilities" (OuterVolumeSpecName: "utilities") pod "8fb4828a-ffeb-41d4-8410-c4ea114e7e61" (UID: "8fb4828a-ffeb-41d4-8410-c4ea114e7e61"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.627085 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-kube-api-access-gm657" (OuterVolumeSpecName: "kube-api-access-gm657") pod "8fb4828a-ffeb-41d4-8410-c4ea114e7e61" (UID: "8fb4828a-ffeb-41d4-8410-c4ea114e7e61"). InnerVolumeSpecName "kube-api-access-gm657". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.674901 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fb4828a-ffeb-41d4-8410-c4ea114e7e61" (UID: "8fb4828a-ffeb-41d4-8410-c4ea114e7e61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.717369 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.717406 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm657\" (UniqueName: \"kubernetes.io/projected/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-kube-api-access-gm657\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.717420 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.014189 4782 generic.go:334] "Generic (PLEG): container finished" podID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerID="a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df" exitCode=0 Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.014245 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerDied","Data":"a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df"} Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.015577 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerDied","Data":"b35a4e0e6150963a20819d29e6e20270f93008d6cf7aa812ef8e1c21fd13b16f"} Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.015671 4782 scope.go:117] "RemoveContainer" containerID="a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.014273 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.020817 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" event={"ID":"83a0d24e-3e0c-4d9a-b735-77c74ceec664","Type":"ContainerStarted","Data":"cdd1d0f2dafd6328a0c712865c17dfde7ee7c7b0efc30bff0931cf120f095178"} Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.038031 4782 scope.go:117] "RemoveContainer" containerID="40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.042819 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xbp4r"] Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.048279 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xbp4r"] Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.056801 4782 scope.go:117] "RemoveContainer" containerID="88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.072905 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" podStartSLOduration=2.635877235 podStartE2EDuration="56.072872086s" podCreationTimestamp="2026-02-02 10:54:35 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.86868993 +0000 UTC m=+956.752882646" lastFinishedPulling="2026-02-02 10:55:30.305684771 +0000 UTC m=+1010.189877497" observedRunningTime="2026-02-02 10:55:31.068356466 +0000 UTC m=+1010.952549182" watchObservedRunningTime="2026-02-02 10:55:31.072872086 +0000 UTC m=+1010.957064802" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.096590 4782 scope.go:117] "RemoveContainer" containerID="a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df" Feb 02 10:55:31 crc kubenswrapper[4782]: E0202 10:55:31.097540 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df\": container with ID starting with a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df not found: ID does not exist" containerID="a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.097601 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df"} err="failed to get container status \"a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df\": rpc error: code = NotFound desc = could not find container \"a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df\": container with ID starting with a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df not found: ID does not exist" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.097656 4782 scope.go:117] "RemoveContainer" containerID="40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7" Feb 02 10:55:31 crc kubenswrapper[4782]: E0202 10:55:31.098134 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7\": container with ID starting with 
40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7 not found: ID does not exist" containerID="40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.098170 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7"} err="failed to get container status \"40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7\": rpc error: code = NotFound desc = could not find container \"40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7\": container with ID starting with 40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7 not found: ID does not exist" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.098192 4782 scope.go:117] "RemoveContainer" containerID="88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444" Feb 02 10:55:31 crc kubenswrapper[4782]: E0202 10:55:31.098447 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444\": container with ID starting with 88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444 not found: ID does not exist" containerID="88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.098479 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444"} err="failed to get container status \"88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444\": rpc error: code = NotFound desc = could not find container \"88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444\": container with ID starting with 88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444 not found: ID does not exist" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.249170 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6x8zf"] Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.249390 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6x8zf" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="registry-server" containerID="cri-o://cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754" gracePeriod=2 Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.651796 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.830082 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdxhk\" (UniqueName: \"kubernetes.io/projected/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-kube-api-access-xdxhk\") pod \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.830226 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-catalog-content\") pod \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.830259 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-utilities\") pod \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.830956 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-utilities" (OuterVolumeSpecName: "utilities") pod "5527d0d6-41e7-42f6-bcb8-65dccddacbd4" (UID: "5527d0d6-41e7-42f6-bcb8-65dccddacbd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.838922 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-kube-api-access-xdxhk" (OuterVolumeSpecName: "kube-api-access-xdxhk") pod "5527d0d6-41e7-42f6-bcb8-65dccddacbd4" (UID: "5527d0d6-41e7-42f6-bcb8-65dccddacbd4"). InnerVolumeSpecName "kube-api-access-xdxhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.877084 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5527d0d6-41e7-42f6-bcb8-65dccddacbd4" (UID: "5527d0d6-41e7-42f6-bcb8-65dccddacbd4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.932998 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdxhk\" (UniqueName: \"kubernetes.io/projected/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-kube-api-access-xdxhk\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.933049 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.933065 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.029225 4782 generic.go:334] "Generic (PLEG): container finished" podID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerID="cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754" exitCode=0 Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.029270 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x8zf" event={"ID":"5527d0d6-41e7-42f6-bcb8-65dccddacbd4","Type":"ContainerDied","Data":"cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754"} Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.029279 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.029294 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x8zf" event={"ID":"5527d0d6-41e7-42f6-bcb8-65dccddacbd4","Type":"ContainerDied","Data":"5e81cbef9e833505fd0df87090b3b8a5e200225b581ca1d4a7afe27bab6d1427"} Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.029311 4782 scope.go:117] "RemoveContainer" containerID="cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.049819 4782 scope.go:117] "RemoveContainer" containerID="101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.059594 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6x8zf"] Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.066450 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6x8zf"] Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.072420 4782 scope.go:117] "RemoveContainer" containerID="02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.092356 4782 scope.go:117] "RemoveContainer" containerID="cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754" Feb 02 10:55:32 crc kubenswrapper[4782]: E0202 10:55:32.093136 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754\": container with ID starting with cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754 not found: ID does not exist" containerID="cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.093173 
4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754"} err="failed to get container status \"cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754\": rpc error: code = NotFound desc = could not find container \"cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754\": container with ID starting with cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754 not found: ID does not exist" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.093202 4782 scope.go:117] "RemoveContainer" containerID="101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6" Feb 02 10:55:32 crc kubenswrapper[4782]: E0202 10:55:32.093674 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6\": container with ID starting with 101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6 not found: ID does not exist" containerID="101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.093707 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6"} err="failed to get container status \"101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6\": rpc error: code = NotFound desc = could not find container \"101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6\": container with ID starting with 101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6 not found: ID does not exist" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.093725 4782 scope.go:117] "RemoveContainer" containerID="02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f" Feb 02 10:55:32 crc kubenswrapper[4782]: E0202 10:55:32.095810 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f\": container with ID starting with 02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f not found: ID does not exist" containerID="02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.095868 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f"} err="failed to get container status \"02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f\": rpc error: code = NotFound desc = could not find container \"02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f\": container with ID starting with 02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f not found: ID does not exist" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.843073 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" path="/var/lib/kubelet/pods/5527d0d6-41e7-42f6-bcb8-65dccddacbd4/volumes" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.844999 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" path="/var/lib/kubelet/pods/8fb4828a-ffeb-41d4-8410-c4ea114e7e61/volumes" Feb 02 10:55:33 crc kubenswrapper[4782]: I0202 
10:55:33.719173 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:33 crc kubenswrapper[4782]: I0202 10:55:33.764815 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.051785 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zphk7"] Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.052048 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zphk7" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="registry-server" containerID="cri-o://778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524" gracePeriod=2 Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.432818 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.594293 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-utilities\") pod \"4d43af81-0992-412f-8847-e3c97ab9c5ec\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.594349 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-catalog-content\") pod \"4d43af81-0992-412f-8847-e3c97ab9c5ec\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.594392 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7694\" (UniqueName: \"kubernetes.io/projected/4d43af81-0992-412f-8847-e3c97ab9c5ec-kube-api-access-b7694\") pod \"4d43af81-0992-412f-8847-e3c97ab9c5ec\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.595213 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-utilities" (OuterVolumeSpecName: "utilities") pod "4d43af81-0992-412f-8847-e3c97ab9c5ec" (UID: "4d43af81-0992-412f-8847-e3c97ab9c5ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.604905 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d43af81-0992-412f-8847-e3c97ab9c5ec-kube-api-access-b7694" (OuterVolumeSpecName: "kube-api-access-b7694") pod "4d43af81-0992-412f-8847-e3c97ab9c5ec" (UID: "4d43af81-0992-412f-8847-e3c97ab9c5ec"). InnerVolumeSpecName "kube-api-access-b7694". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.624687 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d43af81-0992-412f-8847-e3c97ab9c5ec" (UID: "4d43af81-0992-412f-8847-e3c97ab9c5ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.695627 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7694\" (UniqueName: \"kubernetes.io/projected/4d43af81-0992-412f-8847-e3c97ab9c5ec-kube-api-access-b7694\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.695693 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.695707 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.066317 4782 generic.go:334] "Generic (PLEG): container finished" podID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerID="778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524" exitCode=0 Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.066413 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.066426 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerDied","Data":"778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524"} Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.066880 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerDied","Data":"291c7cc1034e1388ada25b16a9b90b3e30e39b678bc61c7573b4a92f1ad048e6"} Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.066900 4782 scope.go:117] "RemoveContainer" containerID="778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.088167 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zphk7"] Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.098191 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zphk7"] Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.104471 4782 scope.go:117] "RemoveContainer" containerID="49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.120206 4782 scope.go:117] "RemoveContainer" containerID="c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.152356 4782 scope.go:117] "RemoveContainer" containerID="778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524" Feb 02 10:55:37 crc kubenswrapper[4782]: E0202 10:55:37.152873 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524\": container with ID starting with 778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524 not found: ID does not exist" containerID="778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.152911 4782 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524"} err="failed to get container status \"778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524\": rpc error: code = NotFound desc = could not find container \"778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524\": container with ID starting with 778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524 not found: ID does not exist" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.152940 4782 scope.go:117] "RemoveContainer" containerID="49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34" Feb 02 10:55:37 crc kubenswrapper[4782]: E0202 10:55:37.153179 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34\": container with ID starting with 49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34 not found: ID does not exist" containerID="49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.153219 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34"} err="failed to get container status \"49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34\": rpc error: code = NotFound desc = could not find container \"49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34\": container with ID starting with 49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34 not found: ID does not exist" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.153232 4782 scope.go:117] "RemoveContainer" containerID="c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71" Feb 02 10:55:37 crc kubenswrapper[4782]: E0202 10:55:37.153616 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71\": container with ID starting with c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71 not found: ID does not exist" containerID="c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.153695 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71"} err="failed to get container status \"c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71\": rpc error: code = NotFound desc = could not find container \"c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71\": container with ID starting with c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71 not found: ID does not exist" Feb 02 10:55:38 crc kubenswrapper[4782]: I0202 10:55:38.828351 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" path="/var/lib/kubelet/pods/4d43af81-0992-412f-8847-e3c97ab9c5ec/volumes" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.299940 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-76smw"] Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300618 4782 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="extract-content" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300630 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="extract-content" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300658 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300683 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300695 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="extract-content" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300701 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="extract-content" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300710 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300717 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300730 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="extract-utilities" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300736 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="extract-utilities" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300743 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="extract-content" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300749 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="extract-content" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300759 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300764 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300773 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="extract-utilities" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300779 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="extract-utilities" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300787 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="extract-utilities" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300793 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="extract-utilities" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300922 4782 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300933 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300941 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.301586 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.307418 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.307657 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.307432 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.307860 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-lpsfl" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.331555 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-76smw"] Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.411742 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v2zgx"] Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.415982 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.421354 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.424778 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651c76ea-95cf-4ed1-80da-6731a9bcb98a-config\") pod \"dnsmasq-dns-675f4bcbfc-76smw\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.424860 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nwmt\" (UniqueName: \"kubernetes.io/projected/651c76ea-95cf-4ed1-80da-6731a9bcb98a-kube-api-access-2nwmt\") pod \"dnsmasq-dns-675f4bcbfc-76smw\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.473184 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v2zgx"] Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.526025 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-config\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.526072 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbjk\" (UniqueName: \"kubernetes.io/projected/0d246705-dc07-488a-9288-59e2a16174fe-kube-api-access-sbbjk\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.526137 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.526161 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651c76ea-95cf-4ed1-80da-6731a9bcb98a-config\") pod \"dnsmasq-dns-675f4bcbfc-76smw\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.526186 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nwmt\" (UniqueName: \"kubernetes.io/projected/651c76ea-95cf-4ed1-80da-6731a9bcb98a-kube-api-access-2nwmt\") pod \"dnsmasq-dns-675f4bcbfc-76smw\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.527725 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651c76ea-95cf-4ed1-80da-6731a9bcb98a-config\") pod \"dnsmasq-dns-675f4bcbfc-76smw\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 
10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.556363 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nwmt\" (UniqueName: \"kubernetes.io/projected/651c76ea-95cf-4ed1-80da-6731a9bcb98a-kube-api-access-2nwmt\") pod \"dnsmasq-dns-675f4bcbfc-76smw\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.623813 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.627332 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.627415 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-config\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.627443 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbjk\" (UniqueName: \"kubernetes.io/projected/0d246705-dc07-488a-9288-59e2a16174fe-kube-api-access-sbbjk\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.628819 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.629484 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-config\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.660931 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbjk\" (UniqueName: \"kubernetes.io/projected/0d246705-dc07-488a-9288-59e2a16174fe-kube-api-access-sbbjk\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.755211 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:47 crc kubenswrapper[4782]: I0202 10:55:47.309079 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v2zgx"] Feb 02 10:55:47 crc kubenswrapper[4782]: W0202 10:55:47.314335 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d246705_dc07_488a_9288_59e2a16174fe.slice/crio-412d8f851c0082d1b10d1b94b76e682711a37dfb601edf6f9abba0cae9aa133a WatchSource:0}: Error finding container 412d8f851c0082d1b10d1b94b76e682711a37dfb601edf6f9abba0cae9aa133a: Status 404 returned error can't find the container with id 412d8f851c0082d1b10d1b94b76e682711a37dfb601edf6f9abba0cae9aa133a Feb 02 10:55:47 crc kubenswrapper[4782]: W0202 10:55:47.372207 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod651c76ea_95cf_4ed1_80da_6731a9bcb98a.slice/crio-2af263505f6a8540ba9b3c383f8f251e1fadb60252e16c35129f3e7f4ddc31b2 WatchSource:0}: Error finding container 2af263505f6a8540ba9b3c383f8f251e1fadb60252e16c35129f3e7f4ddc31b2: Status 404 returned error can't find the container with id 2af263505f6a8540ba9b3c383f8f251e1fadb60252e16c35129f3e7f4ddc31b2 Feb 02 10:55:47 crc kubenswrapper[4782]: I0202 10:55:47.372694 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-76smw"] Feb 02 10:55:48 crc kubenswrapper[4782]: I0202 10:55:48.144758 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" event={"ID":"0d246705-dc07-488a-9288-59e2a16174fe","Type":"ContainerStarted","Data":"412d8f851c0082d1b10d1b94b76e682711a37dfb601edf6f9abba0cae9aa133a"} Feb 02 10:55:48 crc kubenswrapper[4782]: I0202 10:55:48.150733 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" event={"ID":"651c76ea-95cf-4ed1-80da-6731a9bcb98a","Type":"ContainerStarted","Data":"2af263505f6a8540ba9b3c383f8f251e1fadb60252e16c35129f3e7f4ddc31b2"} Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.141515 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-76smw"] Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.174660 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s8sfp"] Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.175996 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.197635 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s8sfp"] Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.376920 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w26jb\" (UniqueName: \"kubernetes.io/projected/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-kube-api-access-w26jb\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.379271 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.379372 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-config\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.480263 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-config\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.480410 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w26jb\" (UniqueName: \"kubernetes.io/projected/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-kube-api-access-w26jb\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.480430 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.481198 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.482020 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-config\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.523720 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w26jb\" (UniqueName: 
\"kubernetes.io/projected/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-kube-api-access-w26jb\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.571284 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v2zgx"] Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.625630 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7jkmx"] Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.629828 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.654543 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7jkmx"] Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.805650 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.806007 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-config\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.806095 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.806125 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc844\" (UniqueName: \"kubernetes.io/projected/76e79a91-7593-4b7a-bb1a-6396209cc424-kube-api-access-cc844\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.907916 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-config\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.907995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.908026 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc844\" (UniqueName: \"kubernetes.io/projected/76e79a91-7593-4b7a-bb1a-6396209cc424-kube-api-access-cc844\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.909801 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-config\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.910604 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.945188 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc844\" (UniqueName: \"kubernetes.io/projected/76e79a91-7593-4b7a-bb1a-6396209cc424-kube-api-access-cc844\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.955014 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.350672 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.352170 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.361801 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362021 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362046 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362120 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l8s6k" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362279 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362515 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362682 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362817 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.506219 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s8sfp"] Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519409 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519458 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519488 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-config-data\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519514 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519540 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519587 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519655 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.520394 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8b77\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-kube-api-access-j8b77\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.520432 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.520498 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622114 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622739 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622786 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-config-data\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622819 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622848 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622908 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622936 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622957 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.623018 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8b77\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-kube-api-access-j8b77\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.623040 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.623390 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622699 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.624420 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-config-data\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.624853 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.625963 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.628317 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.630793 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.631319 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " 
pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.631955 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.641371 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.646071 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8b77\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-kube-api-access-j8b77\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.667968 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.694595 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7jkmx"] Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.700558 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.770821 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.776034 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.780798 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.781050 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.781117 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.781197 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.781237 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.781470 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.781528 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fsk8v" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.821682 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929512 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929580 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929624 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929681 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929717 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929736 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929757 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4jx\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-kube-api-access-vp4jx\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929781 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929807 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929833 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929853 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.031907 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032142 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032189 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4jx\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-kube-api-access-vp4jx\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032224 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032273 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032306 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032328 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032365 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032399 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032444 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032489 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.033278 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.033616 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.033838 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.034496 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.037139 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.037431 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.042446 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.047568 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.051224 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.059173 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.063068 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4jx\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-kube-api-access-vp4jx\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.089217 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.104715 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.187086 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" event={"ID":"76e79a91-7593-4b7a-bb1a-6396209cc424","Type":"ContainerStarted","Data":"bf6952e060684b89e023f4574112773aed8ccdabbd164bcea1f68ba05b888020"} Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.188076 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" event={"ID":"9f871e0b-e0d8-43a7-a251-9601cfcfd87a","Type":"ContainerStarted","Data":"b423e53936fb052435d6af130cac71bf078bb138f7f10831bf50eccca562a831"} Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.322430 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 10:55:51 crc kubenswrapper[4782]: W0202 10:55:51.703057 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode326d5b8_cced_4bdd_858a_3d5b7f8dd2d9.slice/crio-f5e9c30a1317f44fa0c6c47fbf157d56fc7c3351c9ee3750a55dc1c7bb0afa43 WatchSource:0}: Error finding container f5e9c30a1317f44fa0c6c47fbf157d56fc7c3351c9ee3750a55dc1c7bb0afa43: Status 404 returned error can't find the container with id f5e9c30a1317f44fa0c6c47fbf157d56fc7c3351c9ee3750a55dc1c7bb0afa43 Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.078072 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.080151 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.083928 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.085443 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.090929 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-wbrrs" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.091992 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.095421 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.097884 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.203237 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.244319 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9","Type":"ContainerStarted","Data":"f5e9c30a1317f44fa0c6c47fbf157d56fc7c3351c9ee3750a55dc1c7bb0afa43"} Feb 02 10:55:52 crc kubenswrapper[4782]: W0202 10:55:52.254522 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02fc338c_2f8c_4e17_8d5f_7a919f4237a2.slice/crio-541f3f929dc2bd6facceff225b79637fd9688a5a59dfd43d26c677249a42c37f WatchSource:0}: Error finding container 541f3f929dc2bd6facceff225b79637fd9688a5a59dfd43d26c677249a42c37f: Status 404 returned error can't find the container with id 541f3f929dc2bd6facceff225b79637fd9688a5a59dfd43d26c677249a42c37f Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.267812 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phf5c\" (UniqueName: \"kubernetes.io/projected/827c472d-1762-4e1c-a096-2d48ca9af689-kube-api-access-phf5c\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.267876 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827c472d-1762-4e1c-a096-2d48ca9af689-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.267914 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-operator-scripts\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.267944 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-config-data-default\") pod 
\"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.273258 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/827c472d-1762-4e1c-a096-2d48ca9af689-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.273388 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.273430 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-kolla-config\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.273471 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/827c472d-1762-4e1c-a096-2d48ca9af689-config-data-generated\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.376884 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377242 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-kolla-config\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377284 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/827c472d-1762-4e1c-a096-2d48ca9af689-config-data-generated\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377692 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phf5c\" (UniqueName: \"kubernetes.io/projected/827c472d-1762-4e1c-a096-2d48ca9af689-kube-api-access-phf5c\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377732 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827c472d-1762-4e1c-a096-2d48ca9af689-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 
10:55:52.377770 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-operator-scripts\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377797 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-config-data-default\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377844 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/827c472d-1762-4e1c-a096-2d48ca9af689-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.384513 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.392302 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827c472d-1762-4e1c-a096-2d48ca9af689-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.394401 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-operator-scripts\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.395167 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-config-data-default\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.395416 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/827c472d-1762-4e1c-a096-2d48ca9af689-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.395789 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-kolla-config\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.396027 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/827c472d-1762-4e1c-a096-2d48ca9af689-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.412492 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phf5c\" (UniqueName: \"kubernetes.io/projected/827c472d-1762-4e1c-a096-2d48ca9af689-kube-api-access-phf5c\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.465788 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.726711 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.951214 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.951635 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.264918 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02fc338c-2f8c-4e17-8d5f-7a919f4237a2","Type":"ContainerStarted","Data":"541f3f929dc2bd6facceff225b79637fd9688a5a59dfd43d26c677249a42c37f"} Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.521324 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.522988 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.528443 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.528766 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.528805 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xn4sf" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.528812 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.558600 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.700811 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.700866 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.700943 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmwtw\" (UniqueName: \"kubernetes.io/projected/8c2fe596-a023-4206-979f-7f2e7bc81d0e-kube-api-access-fmwtw\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.700974 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.701014 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8c2fe596-a023-4206-979f-7f2e7bc81d0e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.701036 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.701054 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8c2fe596-a023-4206-979f-7f2e7bc81d0e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.701074 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2fe596-a023-4206-979f-7f2e7bc81d0e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.734717 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.735679 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.741849 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.742067 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.742173 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rlffs" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.762459 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803450 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmwtw\" (UniqueName: \"kubernetes.io/projected/8c2fe596-a023-4206-979f-7f2e7bc81d0e-kube-api-access-fmwtw\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803494 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803534 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8c2fe596-a023-4206-979f-7f2e7bc81d0e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803551 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803569 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2fe596-a023-4206-979f-7f2e7bc81d0e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 
10:55:53.803590 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2fe596-a023-4206-979f-7f2e7bc81d0e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803623 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803677 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.805744 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8c2fe596-a023-4206-979f-7f2e7bc81d0e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.806152 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.806535 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.808336 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.810164 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2fe596-a023-4206-979f-7f2e7bc81d0e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.812467 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.827038 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8c2fe596-a023-4206-979f-7f2e7bc81d0e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.866075 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.868962 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmwtw\" (UniqueName: \"kubernetes.io/projected/8c2fe596-a023-4206-979f-7f2e7bc81d0e-kube-api-access-fmwtw\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.904564 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17f9dd31-25b9-4b3f-82a6-12096f36308a-config-data\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.904619 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f9dd31-25b9-4b3f-82a6-12096f36308a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.904723 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f9dd31-25b9-4b3f-82a6-12096f36308a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.904800 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17f9dd31-25b9-4b3f-82a6-12096f36308a-kolla-config\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.904845 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tc8d\" (UniqueName: \"kubernetes.io/projected/17f9dd31-25b9-4b3f-82a6-12096f36308a-kube-api-access-6tc8d\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.033835 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17f9dd31-25b9-4b3f-82a6-12096f36308a-config-data\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.034784 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f9dd31-25b9-4b3f-82a6-12096f36308a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" 
Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.034898 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f9dd31-25b9-4b3f-82a6-12096f36308a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.034958 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17f9dd31-25b9-4b3f-82a6-12096f36308a-kolla-config\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.034985 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tc8d\" (UniqueName: \"kubernetes.io/projected/17f9dd31-25b9-4b3f-82a6-12096f36308a-kube-api-access-6tc8d\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.037960 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17f9dd31-25b9-4b3f-82a6-12096f36308a-config-data\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.038452 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17f9dd31-25b9-4b3f-82a6-12096f36308a-kolla-config\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.058529 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tc8d\" (UniqueName: \"kubernetes.io/projected/17f9dd31-25b9-4b3f-82a6-12096f36308a-kube-api-access-6tc8d\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.067030 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f9dd31-25b9-4b3f-82a6-12096f36308a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.075415 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f9dd31-25b9-4b3f-82a6-12096f36308a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.075985 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.154414 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.486290 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.487415 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.491678 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-krnfc"
Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.500282 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.679342 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9vk7\" (UniqueName: \"kubernetes.io/projected/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1-kube-api-access-b9vk7\") pod \"kube-state-metrics-0\" (UID: \"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.782419 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9vk7\" (UniqueName: \"kubernetes.io/projected/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1-kube-api-access-b9vk7\") pod \"kube-state-metrics-0\" (UID: \"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.829008 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9vk7\" (UniqueName: \"kubernetes.io/projected/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1-kube-api-access-b9vk7\") pod \"kube-state-metrics-0\" (UID: \"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:55:56 crc kubenswrapper[4782]: I0202 10:55:56.118416 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.791785 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.794087 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.797485 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2pvnb"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.797745 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.799303 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.799778 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.800027 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.810192 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955552 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572fc7c8-9560-43d0-ba3e-d3f098494878-config\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955622 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955678 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955708 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/572fc7c8-9560-43d0-ba3e-d3f098494878-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955736 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955773 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955810 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/572fc7c8-9560-43d0-ba3e-d3f098494878-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955836 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxgv9\" (UniqueName: \"kubernetes.io/projected/572fc7c8-9560-43d0-ba3e-d3f098494878-kube-api-access-wxgv9\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056705 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056764 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/572fc7c8-9560-43d0-ba3e-d3f098494878-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056786 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxgv9\" (UniqueName: \"kubernetes.io/projected/572fc7c8-9560-43d0-ba3e-d3f098494878-kube-api-access-wxgv9\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056850 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572fc7c8-9560-43d0-ba3e-d3f098494878-config\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056879 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056902 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056924 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/572fc7c8-9560-43d0-ba3e-d3f098494878-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056946 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.057313 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.058092 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/572fc7c8-9560-43d0-ba3e-d3f098494878-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.058538 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572fc7c8-9560-43d0-ba3e-d3f098494878-config\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.058694 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/572fc7c8-9560-43d0-ba3e-d3f098494878-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.065518 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.066354 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.079007 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.079563 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxgv9\" (UniqueName: \"kubernetes.io/projected/572fc7c8-9560-43d0-ba3e-d3f098494878-kube-api-access-wxgv9\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.084743 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.122851 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.624014 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sv8l5"]
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.625136 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.629242 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-rjr46"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.629486 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.629666 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.645578 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-zs65k"]
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.659069 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sv8l5"]
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.659124 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.672470 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zs65k"]
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765004 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbh5r\" (UniqueName: \"kubernetes.io/projected/e91c0f3d-db81-453d-ad0e-30aeadb66206-kube-api-access-cbh5r\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765059 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-log\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765091 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b009ca1c-fc93-4724-9275-c44039256469-combined-ca-bundle\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765144 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-etc-ovs\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765175 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-run\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765202 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-run\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765220 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b009ca1c-fc93-4724-9275-c44039256469-ovn-controller-tls-certs\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e91c0f3d-db81-453d-ad0e-30aeadb66206-scripts\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765255 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpr5j\" (UniqueName: \"kubernetes.io/projected/b009ca1c-fc93-4724-9275-c44039256469-kube-api-access-rpr5j\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765276 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-run-ovn\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765292 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b009ca1c-fc93-4724-9275-c44039256469-scripts\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765327 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-log-ovn\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765341 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-lib\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.866618 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-log-ovn\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.867982 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-lib\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.868206 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbh5r\" (UniqueName: \"kubernetes.io/projected/e91c0f3d-db81-453d-ad0e-30aeadb66206-kube-api-access-cbh5r\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.868356 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-log\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.872417 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b009ca1c-fc93-4724-9275-c44039256469-combined-ca-bundle\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.872657 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-etc-ovs\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.872809 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-run\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.872943 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-run\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.873086 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b009ca1c-fc93-4724-9275-c44039256469-ovn-controller-tls-certs\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.873210 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e91c0f3d-db81-453d-ad0e-30aeadb66206-scripts\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.873348 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpr5j\" (UniqueName: \"kubernetes.io/projected/b009ca1c-fc93-4724-9275-c44039256469-kube-api-access-rpr5j\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.873468 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-run-ovn\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.873590 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b009ca1c-fc93-4724-9275-c44039256469-scripts\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.874751 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-run\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.868448 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-lib\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.876660 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-etc-ovs\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.868524 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-log\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.876775 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-run\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.877547 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-run-ovn\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.878836 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e91c0f3d-db81-453d-ad0e-30aeadb66206-scripts\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.868579 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-log-ovn\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.881883 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b009ca1c-fc93-4724-9275-c44039256469-scripts\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.890407 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b009ca1c-fc93-4724-9275-c44039256469-combined-ca-bundle\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.898546 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbh5r\" (UniqueName: \"kubernetes.io/projected/e91c0f3d-db81-453d-ad0e-30aeadb66206-kube-api-access-cbh5r\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.901251 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b009ca1c-fc93-4724-9275-c44039256469-ovn-controller-tls-certs\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.920781 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpr5j\" (UniqueName: \"kubernetes.io/projected/b009ca1c-fc93-4724-9275-c44039256469-kube-api-access-rpr5j\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.958480 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5"
Feb 02 10:56:01 crc kubenswrapper[4782]: I0202 10:56:01.004821 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zs65k"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.406856 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.409098 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.411102 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.411274 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.411555 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cxk52"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.411776 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.422944 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502659 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502735 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8169f65-2d63-4127-8d23-ba6d56af1156-config\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502778 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502849 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502876 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502901 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8169f65-2d63-4127-8d23-ba6d56af1156-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502927 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8169f65-2d63-4127-8d23-ba6d56af1156-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502948 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5m45\" (UniqueName: \"kubernetes.io/projected/d8169f65-2d63-4127-8d23-ba6d56af1156-kube-api-access-j5m45\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604124 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5m45\" (UniqueName: \"kubernetes.io/projected/d8169f65-2d63-4127-8d23-ba6d56af1156-kube-api-access-j5m45\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604201 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604239 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8169f65-2d63-4127-8d23-ba6d56af1156-config\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604275 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604338 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604361 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604384 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8169f65-2d63-4127-8d23-ba6d56af1156-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604406 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8169f65-2d63-4127-8d23-ba6d56af1156-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604847 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.605680 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8169f65-2d63-4127-8d23-ba6d56af1156-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.606555 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8169f65-2d63-4127-8d23-ba6d56af1156-config\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.607053 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8169f65-2d63-4127-8d23-ba6d56af1156-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.611520 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.626443 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.626985 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.634194 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5m45\" (UniqueName: \"kubernetes.io/projected/d8169f65-2d63-4127-8d23-ba6d56af1156-kube-api-access-j5m45\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.638935 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.742272 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 02 10:56:10 crc kubenswrapper[4782]: E0202 10:56:10.906036 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 02 10:56:10 crc kubenswrapper[4782]: E0202 10:56:10.906924 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cc844,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-7jkmx_openstack(76e79a91-7593-4b7a-bb1a-6396209cc424): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 10:56:10 crc kubenswrapper[4782]: E0202 10:56:10.908762 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" podUID="76e79a91-7593-4b7a-bb1a-6396209cc424"
Feb 02 10:56:10 crc kubenswrapper[4782]: E0202 10:56:10.975870 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 02 10:56:10 crc kubenswrapper[4782]: E0202 10:56:10.976327 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-s8sfp_openstack(9f871e0b-e0d8-43a7-a251-9601cfcfd87a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 10:56:10 crc kubenswrapper[4782]: E0202 10:56:10.977707 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" podUID="9f871e0b-e0d8-43a7-a251-9601cfcfd87a"
Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.024248 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.024402 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbbjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-v2zgx_openstack(0d246705-dc07-488a-9288-59e2a16174fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.025302 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.025382 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nwmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-76smw_openstack(651c76ea-95cf-4ed1-80da-6731a9bcb98a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.025465 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" podUID="0d246705-dc07-488a-9288-59e2a16174fe"
Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.028181 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" podUID="651c76ea-95cf-4ed1-80da-6731a9bcb98a"
Feb 02 10:56:11 crc kubenswrapper[4782]: I0202 10:56:11.254538 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 02 10:56:11 crc kubenswrapper[4782]: I0202 10:56:11.465174 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"17f9dd31-25b9-4b3f-82a6-12096f36308a","Type":"ContainerStarted","Data":"d3d3a91444d7c138c8b22bef2c48c12dddd3e442ddd566d18bc75262fad5ffc2"}
Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.469278 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" podUID="76e79a91-7593-4b7a-bb1a-6396209cc424"
Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.469356 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" podUID="9f871e0b-e0d8-43a7-a251-9601cfcfd87a"
Feb 02 10:56:11 crc kubenswrapper[4782]: I0202 10:56:11.550431 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 02 10:56:11 crc kubenswrapper[4782]: I0202 10:56:11.659024 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 10:56:11 crc kubenswrapper[4782]: I0202 10:56:11.674405 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 02 10:56:11 crc kubenswrapper[4782]: W0202 10:56:11.677213 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod827c472d_1762_4e1c_a096_2d48ca9af689.slice/crio-2e344634a1adfbc43816bcc49f6a623f50665c6dbb847a401edf83e7a0e92de5 WatchSource:0}: Error finding container 2e344634a1adfbc43816bcc49f6a623f50665c6dbb847a401edf83e7a0e92de5: Status 404 returned error can't find the container with id 2e344634a1adfbc43816bcc49f6a623f50665c6dbb847a401edf83e7a0e92de5
Feb 02 10:56:11 crc kubenswrapper[4782]: I0202 10:56:11.718337 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sv8l5"]
Feb 02 10:56:11 crc kubenswrapper[4782]: W0202 10:56:11.889257 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb009ca1c_fc93_4724_9275_c44039256469.slice/crio-489524d2da6ddc04d5040d0036d4361995b9c1252ce0982bb9dd661ee2309c55 WatchSource:0}: Error finding container 489524d2da6ddc04d5040d0036d4361995b9c1252ce0982bb9dd661ee2309c55: Status 404 returned error can't find the container with id 489524d2da6ddc04d5040d0036d4361995b9c1252ce0982bb9dd661ee2309c55
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.224322 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zs65k"]
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.306446 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw"
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.315964 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx"
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.382953 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbbjk\" (UniqueName: \"kubernetes.io/projected/0d246705-dc07-488a-9288-59e2a16174fe-kube-api-access-sbbjk\") pod \"0d246705-dc07-488a-9288-59e2a16174fe\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") "
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.383086 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-dns-svc\") pod \"0d246705-dc07-488a-9288-59e2a16174fe\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") "
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.383135 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651c76ea-95cf-4ed1-80da-6731a9bcb98a-config\") pod \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") "
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.383248 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-config\") pod \"0d246705-dc07-488a-9288-59e2a16174fe\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") "
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.383285 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nwmt\" (UniqueName: \"kubernetes.io/projected/651c76ea-95cf-4ed1-80da-6731a9bcb98a-kube-api-access-2nwmt\") pod \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") "
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.383834 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651c76ea-95cf-4ed1-80da-6731a9bcb98a-config" (OuterVolumeSpecName: "config") pod "651c76ea-95cf-4ed1-80da-6731a9bcb98a" (UID: "651c76ea-95cf-4ed1-80da-6731a9bcb98a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.383953 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-config" (OuterVolumeSpecName: "config") pod "0d246705-dc07-488a-9288-59e2a16174fe" (UID: "0d246705-dc07-488a-9288-59e2a16174fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.384966 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d246705-dc07-488a-9288-59e2a16174fe" (UID: "0d246705-dc07-488a-9288-59e2a16174fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.392497 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d246705-dc07-488a-9288-59e2a16174fe-kube-api-access-sbbjk" (OuterVolumeSpecName: "kube-api-access-sbbjk") pod "0d246705-dc07-488a-9288-59e2a16174fe" (UID: "0d246705-dc07-488a-9288-59e2a16174fe"). InnerVolumeSpecName "kube-api-access-sbbjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.392782 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651c76ea-95cf-4ed1-80da-6731a9bcb98a-kube-api-access-2nwmt" (OuterVolumeSpecName: "kube-api-access-2nwmt") pod "651c76ea-95cf-4ed1-80da-6731a9bcb98a" (UID: "651c76ea-95cf-4ed1-80da-6731a9bcb98a"). InnerVolumeSpecName "kube-api-access-2nwmt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.489166 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nwmt\" (UniqueName: \"kubernetes.io/projected/651c76ea-95cf-4ed1-80da-6731a9bcb98a-kube-api-access-2nwmt\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.489217 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbbjk\" (UniqueName: \"kubernetes.io/projected/0d246705-dc07-488a-9288-59e2a16174fe-kube-api-access-sbbjk\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.489233 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.489255 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651c76ea-95cf-4ed1-80da-6731a9bcb98a-config\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.489270 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-config\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.507740 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8c2fe596-a023-4206-979f-7f2e7bc81d0e","Type":"ContainerStarted","Data":"3992ab93c4c5f2ed35c729b5437440045e9c79bb0c0183ea1413c7dbb7ffcc51"}
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.511297 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" event={"ID":"651c76ea-95cf-4ed1-80da-6731a9bcb98a","Type":"ContainerDied","Data":"2af263505f6a8540ba9b3c383f8f251e1fadb60252e16c35129f3e7f4ddc31b2"}
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.511389 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw"
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.513398 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1","Type":"ContainerStarted","Data":"b5731da46b9909f62f299535fa86ed29a8dd25ea43d89f4988425d732dfa7580"}
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.515933 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx"
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.515940 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" event={"ID":"0d246705-dc07-488a-9288-59e2a16174fe","Type":"ContainerDied","Data":"412d8f851c0082d1b10d1b94b76e682711a37dfb601edf6f9abba0cae9aa133a"}
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.517562 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"827c472d-1762-4e1c-a096-2d48ca9af689","Type":"ContainerStarted","Data":"2e344634a1adfbc43816bcc49f6a623f50665c6dbb847a401edf83e7a0e92de5"}
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.520961 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zs65k" event={"ID":"e91c0f3d-db81-453d-ad0e-30aeadb66206","Type":"ContainerStarted","Data":"9290020d34f1ea847ba07bed6da2f15ea2cbcad2ac62e51ff19f357df22d87c2"}
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.523670 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5" event={"ID":"b009ca1c-fc93-4724-9275-c44039256469","Type":"ContainerStarted","Data":"489524d2da6ddc04d5040d0036d4361995b9c1252ce0982bb9dd661ee2309c55"}
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.527062 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9","Type":"ContainerStarted","Data":"4b22530b4335201f0edeaaeb102aa0e0c1fe781965be9a91cd8a38308cd04cdb"}
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.538690 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02fc338c-2f8c-4e17-8d5f-7a919f4237a2","Type":"ContainerStarted","Data":"391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb"}
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.636369 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v2zgx"]
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.644957 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v2zgx"]
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.678794 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-76smw"]
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.688512 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-76smw"]
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.795475 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.838238 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d246705-dc07-488a-9288-59e2a16174fe" path="/var/lib/kubelet/pods/0d246705-dc07-488a-9288-59e2a16174fe/volumes"
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.838748 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651c76ea-95cf-4ed1-80da-6731a9bcb98a" path="/var/lib/kubelet/pods/651c76ea-95cf-4ed1-80da-6731a9bcb98a/volumes"
Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.939937 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 10:56:13 crc kubenswrapper[4782]: I0202 10:56:13.548807 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d8169f65-2d63-4127-8d23-ba6d56af1156","Type":"ContainerStarted","Data":"bab9c7bfb4ad70cda552743322558a16070e39f3e35163793fe9f227b8b8e3a4"}
Feb 02 10:56:14 crc kubenswrapper[4782]: I0202 10:56:14.560084 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"572fc7c8-9560-43d0-ba3e-d3f098494878","Type":"ContainerStarted","Data":"696cfa0f58b5075b291ae45620150f0197e5ce439872a66a1cfa55f9e45f7d13"}
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.391927 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-kv4h8"]
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.393285 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.396561 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.408054 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kv4h8"]
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.484820 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-config\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.484872 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-ovn-rundir\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.484944 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlndm\" (UniqueName: \"kubernetes.io/projected/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-kube-api-access-vlndm\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.485111 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.485155 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-combined-ca-bundle\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.485241 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-ovs-rundir\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.590333 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlndm\" (UniqueName: \"kubernetes.io/projected/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-kube-api-access-vlndm\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.590739 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.590769 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-combined-ca-bundle\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.590808 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-ovs-rundir\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.590920 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-config\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.590946 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-ovn-rundir\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.591243 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-ovn-rundir\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.591249 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-ovs-rundir\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.592059 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-config\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8"
Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.600231
4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.613411 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlndm\" (UniqueName: \"kubernetes.io/projected/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-kube-api-access-vlndm\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.624788 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-combined-ca-bundle\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.687631 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7jkmx"] Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.718732 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.735242 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-nmlpl"] Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.742959 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.746580 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-nmlpl"] Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.748609 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.895317 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-config\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.895440 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x95d\" (UniqueName: \"kubernetes.io/projected/ce7bfaff-9623-45e1-a146-6ea2e85691b8-kube-api-access-8x95d\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.896086 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.896237 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.986767 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s8sfp"] Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.998171 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-config\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.003902 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x95d\" (UniqueName: \"kubernetes.io/projected/ce7bfaff-9623-45e1-a146-6ea2e85691b8-kube-api-access-8x95d\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.004040 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.004147 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.006948 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.007698 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-config\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.021688 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tfpvt"] Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.022817 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.033666 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x95d\" (UniqueName: \"kubernetes.io/projected/ce7bfaff-9623-45e1-a146-6ea2e85691b8-kube-api-access-8x95d\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.034880 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.046677 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.111848 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.112273 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.126220 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.126318 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkjmn\" (UniqueName: \"kubernetes.io/projected/ede109fe-b194-4a02-992d-f1132849fc0d-kube-api-access-wkjmn\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.126373 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.126432 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-config\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.136571 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tfpvt"] Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.228054 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.228111 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-config\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.228154 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.228216 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.228260 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkjmn\" (UniqueName: \"kubernetes.io/projected/ede109fe-b194-4a02-992d-f1132849fc0d-kube-api-access-wkjmn\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.229167 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.229785 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.233929 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-config\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.234525 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.263264 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkjmn\" (UniqueName: \"kubernetes.io/projected/ede109fe-b194-4a02-992d-f1132849fc0d-kube-api-access-wkjmn\") pod 
\"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.469077 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.125231 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.140056 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.260672 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w26jb\" (UniqueName: \"kubernetes.io/projected/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-kube-api-access-w26jb\") pod \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.260755 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-dns-svc\") pod \"76e79a91-7593-4b7a-bb1a-6396209cc424\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.260835 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-config\") pod \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.260860 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-dns-svc\") pod \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.260995 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc844\" (UniqueName: \"kubernetes.io/projected/76e79a91-7593-4b7a-bb1a-6396209cc424-kube-api-access-cc844\") pod \"76e79a91-7593-4b7a-bb1a-6396209cc424\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.261087 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-config\") pod \"76e79a91-7593-4b7a-bb1a-6396209cc424\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.261580 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-config" (OuterVolumeSpecName: "config") pod "9f871e0b-e0d8-43a7-a251-9601cfcfd87a" (UID: "9f871e0b-e0d8-43a7-a251-9601cfcfd87a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.261955 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-config" (OuterVolumeSpecName: "config") pod "76e79a91-7593-4b7a-bb1a-6396209cc424" (UID: "76e79a91-7593-4b7a-bb1a-6396209cc424"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.262524 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76e79a91-7593-4b7a-bb1a-6396209cc424" (UID: "76e79a91-7593-4b7a-bb1a-6396209cc424"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.262536 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f871e0b-e0d8-43a7-a251-9601cfcfd87a" (UID: "9f871e0b-e0d8-43a7-a251-9601cfcfd87a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.264596 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-kube-api-access-w26jb" (OuterVolumeSpecName: "kube-api-access-w26jb") pod "9f871e0b-e0d8-43a7-a251-9601cfcfd87a" (UID: "9f871e0b-e0d8-43a7-a251-9601cfcfd87a"). InnerVolumeSpecName "kube-api-access-w26jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.270127 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e79a91-7593-4b7a-bb1a-6396209cc424-kube-api-access-cc844" (OuterVolumeSpecName: "kube-api-access-cc844") pod "76e79a91-7593-4b7a-bb1a-6396209cc424" (UID: "76e79a91-7593-4b7a-bb1a-6396209cc424"). InnerVolumeSpecName "kube-api-access-cc844". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.364191 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc844\" (UniqueName: \"kubernetes.io/projected/76e79a91-7593-4b7a-bb1a-6396209cc424-kube-api-access-cc844\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.364240 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.364255 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.364267 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w26jb\" (UniqueName: \"kubernetes.io/projected/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-kube-api-access-w26jb\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.364278 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.364288 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.616297 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" event={"ID":"76e79a91-7593-4b7a-bb1a-6396209cc424","Type":"ContainerDied","Data":"bf6952e060684b89e023f4574112773aed8ccdabbd164bcea1f68ba05b888020"} Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.616313 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.617706 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" event={"ID":"9f871e0b-e0d8-43a7-a251-9601cfcfd87a","Type":"ContainerDied","Data":"b423e53936fb052435d6af130cac71bf078bb138f7f10831bf50eccca562a831"} Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.617762 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.673207 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7jkmx"] Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.681847 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7jkmx"] Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.725475 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s8sfp"] Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.734034 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s8sfp"] Feb 02 10:56:20 crc kubenswrapper[4782]: I0202 10:56:20.456024 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kv4h8"] Feb 02 10:56:20 crc kubenswrapper[4782]: I0202 10:56:20.636420 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kv4h8" event={"ID":"c9cb1af6-ff01-4474-ad02-56938ef7e5a1","Type":"ContainerStarted","Data":"036122d3db469764f7b8ba7026aed4fe368a51bf34e6548f62c28d3e215e45c0"} Feb 02 10:56:20 crc kubenswrapper[4782]: I0202 10:56:20.780304 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tfpvt"] Feb 02 10:56:20 crc kubenswrapper[4782]: I0202 10:56:20.841819 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e79a91-7593-4b7a-bb1a-6396209cc424" path="/var/lib/kubelet/pods/76e79a91-7593-4b7a-bb1a-6396209cc424/volumes" Feb 02 10:56:20 crc kubenswrapper[4782]: I0202 10:56:20.842261 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f871e0b-e0d8-43a7-a251-9601cfcfd87a" path="/var/lib/kubelet/pods/9f871e0b-e0d8-43a7-a251-9601cfcfd87a/volumes" Feb 02 10:56:20 crc kubenswrapper[4782]: I0202 10:56:20.923199 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-nmlpl"] Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.667110 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"827c472d-1762-4e1c-a096-2d48ca9af689","Type":"ContainerStarted","Data":"9396400999cf874396d9e7d1690f8cf7ae7be01918ca48a4abec931d8d010c1c"} Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.683611 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8c2fe596-a023-4206-979f-7f2e7bc81d0e","Type":"ContainerStarted","Data":"8ea80d7a6697910033180c13651a4f1a0db2ca17330d6dc67d29ea3d1d0e4318"} Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.703986 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zs65k" event={"ID":"e91c0f3d-db81-453d-ad0e-30aeadb66206","Type":"ContainerStarted","Data":"1064d4e5f9b67709bda7004f004b01a82a6da13999d6afdf338b696d7bf74ef8"} Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.710345 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" event={"ID":"ce7bfaff-9623-45e1-a146-6ea2e85691b8","Type":"ContainerStarted","Data":"cb78cf6d16099541f3ebf963695779d122ec9f7aca28fae1d462165f83b98ae3"} Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.717705 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" 
event={"ID":"ede109fe-b194-4a02-992d-f1132849fc0d","Type":"ContainerStarted","Data":"70e38c18e7eeadf50540fc012954a93846a0fd6be83565a64c6ee300da9f11db"} Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.719106 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"17f9dd31-25b9-4b3f-82a6-12096f36308a","Type":"ContainerStarted","Data":"9df6fc7083c621ea0ea773e2ff44d99fc6615fb81d0f78c013aa7fc82dfa4a55"} Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.719678 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.779408 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.586281823 podStartE2EDuration="28.77938771s" podCreationTimestamp="2026-02-02 10:55:53 +0000 UTC" firstStartedPulling="2026-02-02 10:56:11.263790542 +0000 UTC m=+1051.147983258" lastFinishedPulling="2026-02-02 10:56:19.456896429 +0000 UTC m=+1059.341089145" observedRunningTime="2026-02-02 10:56:21.773491421 +0000 UTC m=+1061.657684137" watchObservedRunningTime="2026-02-02 10:56:21.77938771 +0000 UTC m=+1061.663580426" Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.731844 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5" event={"ID":"b009ca1c-fc93-4724-9275-c44039256469","Type":"ContainerStarted","Data":"7d7132de1e2ab4b5ab034ed6f9ed17821d273b1a3b5cfaeb17d0ac4ade0c26ba"} Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.733178 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.735507 4782 generic.go:334] "Generic (PLEG): container finished" podID="e91c0f3d-db81-453d-ad0e-30aeadb66206" containerID="1064d4e5f9b67709bda7004f004b01a82a6da13999d6afdf338b696d7bf74ef8" exitCode=0 Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.735591 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zs65k" event={"ID":"e91c0f3d-db81-453d-ad0e-30aeadb66206","Type":"ContainerDied","Data":"1064d4e5f9b67709bda7004f004b01a82a6da13999d6afdf338b696d7bf74ef8"} Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.750900 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d8169f65-2d63-4127-8d23-ba6d56af1156","Type":"ContainerStarted","Data":"7eaffd15efb32899894661b48c4a227c87b82d3b2a195818ee62a977ae85d95a"} Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.768146 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sv8l5" podStartSLOduration=14.106884231 podStartE2EDuration="22.768119054s" podCreationTimestamp="2026-02-02 10:56:00 +0000 UTC" firstStartedPulling="2026-02-02 10:56:11.891617698 +0000 UTC m=+1051.775810414" lastFinishedPulling="2026-02-02 10:56:20.552852531 +0000 UTC m=+1060.437045237" observedRunningTime="2026-02-02 10:56:22.754405502 +0000 UTC m=+1062.638598218" watchObservedRunningTime="2026-02-02 10:56:22.768119054 +0000 UTC m=+1062.652311770" Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.951786 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:56:22 
crc kubenswrapper[4782]: I0202 10:56:22.952116 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.952162 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:56:23 crc kubenswrapper[4782]: I0202 10:56:23.758230 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"572fc7c8-9560-43d0-ba3e-d3f098494878","Type":"ContainerStarted","Data":"d48966c114b9884cfd8feccb0d76128e705dc668d4df7b9b7bc5eca7672d33b8"} Feb 02 10:56:23 crc kubenswrapper[4782]: I0202 10:56:23.758842 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"723d0d966296427a3d1b5e2811fbfcf2b8df7a346539a78c1cbaf730d23723a1"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:56:23 crc kubenswrapper[4782]: I0202 10:56:23.758904 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://723d0d966296427a3d1b5e2811fbfcf2b8df7a346539a78c1cbaf730d23723a1" gracePeriod=600 Feb 02 10:56:24 crc kubenswrapper[4782]: I0202 10:56:24.767088 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="723d0d966296427a3d1b5e2811fbfcf2b8df7a346539a78c1cbaf730d23723a1" exitCode=0 Feb 02 10:56:24 crc kubenswrapper[4782]: I0202 10:56:24.767146 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"723d0d966296427a3d1b5e2811fbfcf2b8df7a346539a78c1cbaf730d23723a1"} Feb 02 10:56:24 crc kubenswrapper[4782]: I0202 10:56:24.767423 4782 scope.go:117] "RemoveContainer" containerID="2dc043efe5736739c3acc8fe9716ce3a52d3c218a415682bfde40984fdbbbf0c" Feb 02 10:56:27 crc kubenswrapper[4782]: I0202 10:56:27.791013 4782 generic.go:334] "Generic (PLEG): container finished" podID="8c2fe596-a023-4206-979f-7f2e7bc81d0e" containerID="8ea80d7a6697910033180c13651a4f1a0db2ca17330d6dc67d29ea3d1d0e4318" exitCode=0 Feb 02 10:56:27 crc kubenswrapper[4782]: I0202 10:56:27.792209 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8c2fe596-a023-4206-979f-7f2e7bc81d0e","Type":"ContainerDied","Data":"8ea80d7a6697910033180c13651a4f1a0db2ca17330d6dc67d29ea3d1d0e4318"} Feb 02 10:56:28 crc kubenswrapper[4782]: I0202 10:56:28.810910 4782 generic.go:334] "Generic (PLEG): container finished" podID="827c472d-1762-4e1c-a096-2d48ca9af689" containerID="9396400999cf874396d9e7d1690f8cf7ae7be01918ca48a4abec931d8d010c1c" exitCode=0 Feb 02 10:56:28 crc kubenswrapper[4782]: I0202 10:56:28.810994 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"827c472d-1762-4e1c-a096-2d48ca9af689","Type":"ContainerDied","Data":"9396400999cf874396d9e7d1690f8cf7ae7be01918ca48a4abec931d8d010c1c"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.079882 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.820509 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kv4h8" event={"ID":"c9cb1af6-ff01-4474-ad02-56938ef7e5a1","Type":"ContainerStarted","Data":"f9bb541b87910adc8791c6046b89a907b564a12ce96dc3136d94ec69254fdedb"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.823538 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"572fc7c8-9560-43d0-ba3e-d3f098494878","Type":"ContainerStarted","Data":"c57bb4ab461e6a4f9be2fd4f5dff275a79d222ed1d1f9ff74b11d79362a597db"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.826320 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1","Type":"ContainerStarted","Data":"74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.826521 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.829670 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8c2fe596-a023-4206-979f-7f2e7bc81d0e","Type":"ContainerStarted","Data":"ba406c9b1ae0b8574bd18cda4c3b424f6233c669e45d79dc02c388bf5c2b83f4"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.832529 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zs65k" event={"ID":"e91c0f3d-db81-453d-ad0e-30aeadb66206","Type":"ContainerStarted","Data":"156a0ea548ccaf29a093e66575d64025d2c15fe46cf273442ca339bb25a93f67"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.832566 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zs65k" event={"ID":"e91c0f3d-db81-453d-ad0e-30aeadb66206","Type":"ContainerStarted","Data":"bb6745b2ce15f1adee676d611a3588d4088747b1ea6177cfe6e1b1770e3c84e9"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.832603 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.832894 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.835738 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerID="4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5" exitCode=0 Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.835804 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" event={"ID":"ce7bfaff-9623-45e1-a146-6ea2e85691b8","Type":"ContainerDied","Data":"4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.851398 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"cc93bfcd857ff139ba103c2136bd4c7838f73ea68a2b8fc097a6c493cab92dd0"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.854237 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-kv4h8" podStartSLOduration=4.811381827 podStartE2EDuration="12.854215102s" podCreationTimestamp="2026-02-02 10:56:17 +0000 UTC" firstStartedPulling="2026-02-02 10:56:20.576336854 +0000 UTC m=+1060.460529570" lastFinishedPulling="2026-02-02 10:56:28.619170109 +0000 UTC m=+1068.503362845" observedRunningTime="2026-02-02 10:56:29.840798558 +0000 UTC m=+1069.724991274" watchObservedRunningTime="2026-02-02 10:56:29.854215102 +0000 UTC m=+1069.738407818" Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.857802 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"827c472d-1762-4e1c-a096-2d48ca9af689","Type":"ContainerStarted","Data":"60a5c5ce3a9d7c529f9f2e9eb361ccf1b38f85509256a325a169fac131c09375"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.862415 4782 generic.go:334] "Generic (PLEG): container finished" podID="ede109fe-b194-4a02-992d-f1132849fc0d" containerID="ab258d10ba96d70b4cfb3ed122e2e498a9dd3427d5d89bbbbf656b5efa7359c1" exitCode=0 Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.862721 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" event={"ID":"ede109fe-b194-4a02-992d-f1132849fc0d","Type":"ContainerDied","Data":"ab258d10ba96d70b4cfb3ed122e2e498a9dd3427d5d89bbbbf656b5efa7359c1"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.877412 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-zs65k" podStartSLOduration=22.237304784 podStartE2EDuration="29.877391395s" podCreationTimestamp="2026-02-02 10:56:00 +0000 UTC" firstStartedPulling="2026-02-02 10:56:12.248309956 +0000 UTC m=+1052.132502672" lastFinishedPulling="2026-02-02 10:56:19.888396567 +0000 UTC m=+1059.772589283" observedRunningTime="2026-02-02 10:56:29.871817576 +0000 UTC m=+1069.756010292" watchObservedRunningTime="2026-02-02 10:56:29.877391395 +0000 UTC m=+1069.761584111" Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.885029 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d8169f65-2d63-4127-8d23-ba6d56af1156","Type":"ContainerStarted","Data":"824c87b9036397c463fb697081b4e9a2bf392cdba163c52e8dfd267e828e7f40"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.911906 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=30.138690052 podStartE2EDuration="37.911889813s" podCreationTimestamp="2026-02-02 10:55:52 +0000 UTC" firstStartedPulling="2026-02-02 10:56:11.706545022 +0000 UTC m=+1051.590737738" lastFinishedPulling="2026-02-02 10:56:19.479744783 +0000 UTC m=+1059.363937499" observedRunningTime="2026-02-02 10:56:29.899753495 +0000 UTC m=+1069.783946211" watchObservedRunningTime="2026-02-02 10:56:29.911889813 +0000 UTC m=+1069.796082529" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.013324 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.450336777 podStartE2EDuration="32.013308855s" podCreationTimestamp="2026-02-02 10:55:58 +0000 UTC" firstStartedPulling="2026-02-02 10:56:14.063742087 +0000 UTC 
m=+1053.947934803" lastFinishedPulling="2026-02-02 10:56:28.626714165 +0000 UTC m=+1068.510906881" observedRunningTime="2026-02-02 10:56:30.010902206 +0000 UTC m=+1069.895094922" watchObservedRunningTime="2026-02-02 10:56:30.013308855 +0000 UTC m=+1069.897501571" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.067320 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.163357071 podStartE2EDuration="35.06730217s" podCreationTimestamp="2026-02-02 10:55:55 +0000 UTC" firstStartedPulling="2026-02-02 10:56:11.697887594 +0000 UTC m=+1051.582080310" lastFinishedPulling="2026-02-02 10:56:28.601832693 +0000 UTC m=+1068.486025409" observedRunningTime="2026-02-02 10:56:30.055929484 +0000 UTC m=+1069.940122200" watchObservedRunningTime="2026-02-02 10:56:30.06730217 +0000 UTC m=+1069.951494886" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.124899 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.124940 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.178012 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.295693368 podStartE2EDuration="39.177988927s" podCreationTimestamp="2026-02-02 10:55:51 +0000 UTC" firstStartedPulling="2026-02-02 10:56:11.697559725 +0000 UTC m=+1051.581752441" lastFinishedPulling="2026-02-02 10:56:20.579855264 +0000 UTC m=+1060.464048000" observedRunningTime="2026-02-02 10:56:30.120983716 +0000 UTC m=+1070.005176432" watchObservedRunningTime="2026-02-02 10:56:30.177988927 +0000 UTC m=+1070.062181643" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.183688 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.363411903 podStartE2EDuration="29.18366544s" podCreationTimestamp="2026-02-02 10:56:01 +0000 UTC" firstStartedPulling="2026-02-02 10:56:12.829902659 +0000 UTC m=+1052.714095375" lastFinishedPulling="2026-02-02 10:56:28.650156196 +0000 UTC m=+1068.534348912" observedRunningTime="2026-02-02 10:56:30.176901576 +0000 UTC m=+1070.061094282" watchObservedRunningTime="2026-02-02 10:56:30.18366544 +0000 UTC m=+1070.067858166" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.470081 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.892819 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" event={"ID":"ce7bfaff-9623-45e1-a146-6ea2e85691b8","Type":"ContainerStarted","Data":"31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308"} Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.893751 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.896278 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" event={"ID":"ede109fe-b194-4a02-992d-f1132849fc0d","Type":"ContainerStarted","Data":"79490beed063daacfc93ca748659e8cb59165e9834ed5634b2a43d1cdfb9b23a"} Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.896310 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.920375 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" podStartSLOduration=7.777998548 podStartE2EDuration="13.920357661s" podCreationTimestamp="2026-02-02 10:56:17 +0000 UTC" firstStartedPulling="2026-02-02 10:56:21.575808454 +0000 UTC m=+1061.460001180" lastFinishedPulling="2026-02-02 10:56:27.718167577 +0000 UTC m=+1067.602360293" observedRunningTime="2026-02-02 10:56:30.910835789 +0000 UTC m=+1070.795028505" watchObservedRunningTime="2026-02-02 10:56:30.920357661 +0000 UTC m=+1070.804550387" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.932759 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" podStartSLOduration=12.209303698 podStartE2EDuration="13.932742116s" podCreationTimestamp="2026-02-02 10:56:17 +0000 UTC" firstStartedPulling="2026-02-02 10:56:20.823653021 +0000 UTC m=+1060.707845737" lastFinishedPulling="2026-02-02 10:56:22.547091439 +0000 UTC m=+1062.431284155" observedRunningTime="2026-02-02 10:56:30.928945047 +0000 UTC m=+1070.813137763" watchObservedRunningTime="2026-02-02 10:56:30.932742116 +0000 UTC m=+1070.816934832" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.944667 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:32 crc kubenswrapper[4782]: E0202 10:56:32.401906 4782 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.147:35332->38.102.83.147:40373: write tcp 38.102.83.147:35332->38.102.83.147:40373: write: broken pipe Feb 02 10:56:32 crc kubenswrapper[4782]: I0202 10:56:32.727965 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 02 10:56:32 crc kubenswrapper[4782]: I0202 10:56:32.728043 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 02 10:56:32 crc kubenswrapper[4782]: I0202 10:56:32.742745 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:32 crc kubenswrapper[4782]: I0202 10:56:32.742787 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:32 crc kubenswrapper[4782]: I0202 10:56:32.786626 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:32 crc kubenswrapper[4782]: I0202 10:56:32.967894 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.134165 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.135592 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.139776 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.139969 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.142970 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mvq24" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.183811 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.191869 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214487 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a65af67-822b-44b8-a2be-a132de866a2e-config\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214553 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214609 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xghs5\" (UniqueName: \"kubernetes.io/projected/7a65af67-822b-44b8-a2be-a132de866a2e-kube-api-access-xghs5\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214628 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a65af67-822b-44b8-a2be-a132de866a2e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214676 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214728 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a65af67-822b-44b8-a2be-a132de866a2e-scripts\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214761 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: 
I0202 10:56:33.316395 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.316454 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a65af67-822b-44b8-a2be-a132de866a2e-config\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.316504 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.316572 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xghs5\" (UniqueName: \"kubernetes.io/projected/7a65af67-822b-44b8-a2be-a132de866a2e-kube-api-access-xghs5\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.316598 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a65af67-822b-44b8-a2be-a132de866a2e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.316637 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.316729 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a65af67-822b-44b8-a2be-a132de866a2e-scripts\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.318018 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a65af67-822b-44b8-a2be-a132de866a2e-config\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.318146 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a65af67-822b-44b8-a2be-a132de866a2e-scripts\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.318597 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a65af67-822b-44b8-a2be-a132de866a2e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.328309 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.329536 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.347402 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.372460 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xghs5\" (UniqueName: \"kubernetes.io/projected/7a65af67-822b-44b8-a2be-a132de866a2e-kube-api-access-xghs5\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.455042 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 10:56:34 crc kubenswrapper[4782]: I0202 10:56:34.120650 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 10:56:34 crc kubenswrapper[4782]: W0202 10:56:34.135036 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a65af67_822b_44b8_a2be_a132de866a2e.slice/crio-0c87f29c2ea77b93f228a7a7b9efd267579a68755a99e8103d87a1855c896fa4 WatchSource:0}: Error finding container 0c87f29c2ea77b93f228a7a7b9efd267579a68755a99e8103d87a1855c896fa4: Status 404 returned error can't find the container with id 0c87f29c2ea77b93f228a7a7b9efd267579a68755a99e8103d87a1855c896fa4 Feb 02 10:56:34 crc kubenswrapper[4782]: I0202 10:56:34.156438 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 02 10:56:34 crc kubenswrapper[4782]: I0202 10:56:34.156499 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 02 10:56:34 crc kubenswrapper[4782]: I0202 10:56:34.927202 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7a65af67-822b-44b8-a2be-a132de866a2e","Type":"ContainerStarted","Data":"0c87f29c2ea77b93f228a7a7b9efd267579a68755a99e8103d87a1855c896fa4"} Feb 02 10:56:35 crc kubenswrapper[4782]: I0202 10:56:35.168468 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 02 10:56:35 crc kubenswrapper[4782]: I0202 10:56:35.267140 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 02 10:56:36 crc kubenswrapper[4782]: I0202 10:56:36.125402 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 10:56:36 crc kubenswrapper[4782]: I0202 10:56:36.753811 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/openstack-cell1-galera-0" Feb 02 10:56:36 crc kubenswrapper[4782]: I0202 10:56:36.863536 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 02 10:56:36 crc kubenswrapper[4782]: I0202 10:56:36.961895 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7a65af67-822b-44b8-a2be-a132de866a2e","Type":"ContainerStarted","Data":"41fb462aa96c1db4145f05bb66a1ee1ffadd4a3ae7ec4774cfcdb2c7f533869e"} Feb 02 10:56:36 crc kubenswrapper[4782]: I0202 10:56:36.961960 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7a65af67-822b-44b8-a2be-a132de866a2e","Type":"ContainerStarted","Data":"9ffa8fd5ae6a1851ac83572411be2d24ad12f2d84ad32dd2ea40289a0010c720"} Feb 02 10:56:36 crc kubenswrapper[4782]: I0202 10:56:36.986190 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.208092221 podStartE2EDuration="3.986173532s" podCreationTimestamp="2026-02-02 10:56:33 +0000 UTC" firstStartedPulling="2026-02-02 10:56:34.137589837 +0000 UTC m=+1074.021782553" lastFinishedPulling="2026-02-02 10:56:35.915671148 +0000 UTC m=+1075.799863864" observedRunningTime="2026-02-02 10:56:36.983477685 +0000 UTC m=+1076.867670401" watchObservedRunningTime="2026-02-02 10:56:36.986173532 +0000 UTC m=+1076.870366248" Feb 02 10:56:37 crc kubenswrapper[4782]: I0202 10:56:37.969257 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 02 10:56:38 crc kubenswrapper[4782]: I0202 10:56:38.114862 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:38 crc kubenswrapper[4782]: I0202 10:56:38.470773 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:38 crc kubenswrapper[4782]: I0202 10:56:38.527459 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-nmlpl"] Feb 02 10:56:38 crc kubenswrapper[4782]: I0202 10:56:38.981376 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerName="dnsmasq-dns" containerID="cri-o://31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308" gracePeriod=10 Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.376364 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-377c-account-create-update-4zm4s"] Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.381657 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.388288 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.403590 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-l6d9n"] Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.405812 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.421844 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqnpd\" (UniqueName: \"kubernetes.io/projected/bfde9ba3-fda5-496b-8ee5-52430e61f02a-kube-api-access-cqnpd\") pod \"glance-377c-account-create-update-4zm4s\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.422172 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfde9ba3-fda5-496b-8ee5-52430e61f02a-operator-scripts\") pod \"glance-377c-account-create-update-4zm4s\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.425878 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l6d9n"] Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.498282 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-377c-account-create-update-4zm4s"] Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.523378 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvg7l\" (UniqueName: \"kubernetes.io/projected/ce57fffc-4d75-495f-b7ed-28676054f90e-kube-api-access-hvg7l\") pod \"glance-db-create-l6d9n\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.523462 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce57fffc-4d75-495f-b7ed-28676054f90e-operator-scripts\") pod \"glance-db-create-l6d9n\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.523513 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqnpd\" (UniqueName: \"kubernetes.io/projected/bfde9ba3-fda5-496b-8ee5-52430e61f02a-kube-api-access-cqnpd\") pod \"glance-377c-account-create-update-4zm4s\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.523562 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfde9ba3-fda5-496b-8ee5-52430e61f02a-operator-scripts\") pod \"glance-377c-account-create-update-4zm4s\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.524416 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfde9ba3-fda5-496b-8ee5-52430e61f02a-operator-scripts\") pod \"glance-377c-account-create-update-4zm4s\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.567585 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqnpd\" (UniqueName: 
\"kubernetes.io/projected/bfde9ba3-fda5-496b-8ee5-52430e61f02a-kube-api-access-cqnpd\") pod \"glance-377c-account-create-update-4zm4s\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.624819 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce57fffc-4d75-495f-b7ed-28676054f90e-operator-scripts\") pod \"glance-db-create-l6d9n\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.625316 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvg7l\" (UniqueName: \"kubernetes.io/projected/ce57fffc-4d75-495f-b7ed-28676054f90e-kube-api-access-hvg7l\") pod \"glance-db-create-l6d9n\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.627044 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce57fffc-4d75-495f-b7ed-28676054f90e-operator-scripts\") pod \"glance-db-create-l6d9n\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.653589 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvg7l\" (UniqueName: \"kubernetes.io/projected/ce57fffc-4d75-495f-b7ed-28676054f90e-kube-api-access-hvg7l\") pod \"glance-db-create-l6d9n\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.701762 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.738408 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.797201 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.831372 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x95d\" (UniqueName: \"kubernetes.io/projected/ce7bfaff-9623-45e1-a146-6ea2e85691b8-kube-api-access-8x95d\") pod \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.831447 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-dns-svc\") pod \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.831496 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-config\") pod \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.831660 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-ovsdbserver-sb\") pod \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.851109 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7bfaff-9623-45e1-a146-6ea2e85691b8-kube-api-access-8x95d" (OuterVolumeSpecName: "kube-api-access-8x95d") pod "ce7bfaff-9623-45e1-a146-6ea2e85691b8" (UID: "ce7bfaff-9623-45e1-a146-6ea2e85691b8"). InnerVolumeSpecName "kube-api-access-8x95d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.904026 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-config" (OuterVolumeSpecName: "config") pod "ce7bfaff-9623-45e1-a146-6ea2e85691b8" (UID: "ce7bfaff-9623-45e1-a146-6ea2e85691b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.904968 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce7bfaff-9623-45e1-a146-6ea2e85691b8" (UID: "ce7bfaff-9623-45e1-a146-6ea2e85691b8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.910695 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce7bfaff-9623-45e1-a146-6ea2e85691b8" (UID: "ce7bfaff-9623-45e1-a146-6ea2e85691b8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.933937 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.933982 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.933994 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.934007 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x95d\" (UniqueName: \"kubernetes.io/projected/ce7bfaff-9623-45e1-a146-6ea2e85691b8-kube-api-access-8x95d\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.988550 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerID="31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308" exitCode=0 Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.988585 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" event={"ID":"ce7bfaff-9623-45e1-a146-6ea2e85691b8","Type":"ContainerDied","Data":"31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308"} Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.988608 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" event={"ID":"ce7bfaff-9623-45e1-a146-6ea2e85691b8","Type":"ContainerDied","Data":"cb78cf6d16099541f3ebf963695779d122ec9f7aca28fae1d462165f83b98ae3"} Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.988626 4782 scope.go:117] "RemoveContainer" containerID="31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.988966 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.020486 4782 scope.go:117] "RemoveContainer" containerID="4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5" Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.030515 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-nmlpl"] Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.039167 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-nmlpl"] Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.043315 4782 scope.go:117] "RemoveContainer" containerID="31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308" Feb 02 10:56:40 crc kubenswrapper[4782]: E0202 10:56:40.043790 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308\": container with ID starting with 31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308 not found: ID does not exist" containerID="31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308" Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.043828 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308"} err="failed to get container status \"31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308\": rpc error: code = NotFound desc = could not find container \"31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308\": container with ID starting with 31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308 not found: ID does not exist" Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.043852 4782 scope.go:117] "RemoveContainer" containerID="4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5" Feb 02 10:56:40 crc kubenswrapper[4782]: E0202 10:56:40.044222 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5\": container with ID starting with 4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5 not found: ID does not exist" containerID="4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5" Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.044246 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5"} err="failed to get container status \"4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5\": rpc error: code = NotFound desc = could not find container \"4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5\": container with ID starting with 4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5 not found: ID does not exist" Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.250085 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-377c-account-create-update-4zm4s"] Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.346087 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l6d9n"] Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.833172 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" path="/var/lib/kubelet/pods/ce7bfaff-9623-45e1-a146-6ea2e85691b8/volumes" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.011518 4782 generic.go:334] "Generic (PLEG): container finished" podID="bfde9ba3-fda5-496b-8ee5-52430e61f02a" containerID="922a5052c537ca60debaeb30c310ad62b9d6cc2296c5f5cb93deeef6d784a0c2" exitCode=0 Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.011589 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-377c-account-create-update-4zm4s" event={"ID":"bfde9ba3-fda5-496b-8ee5-52430e61f02a","Type":"ContainerDied","Data":"922a5052c537ca60debaeb30c310ad62b9d6cc2296c5f5cb93deeef6d784a0c2"} Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.011617 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-377c-account-create-update-4zm4s" event={"ID":"bfde9ba3-fda5-496b-8ee5-52430e61f02a","Type":"ContainerStarted","Data":"6fa69f75d3ab61cbf3d27efd1496c902da7a79fa8b733a929d055ffcecfa49f8"} Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.014969 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce57fffc-4d75-495f-b7ed-28676054f90e" containerID="2a0dfecd12eefed7e04fa4bd8706afbc2a21a95326fc9fb0c721694048febe14" exitCode=0 Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.015191 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l6d9n" event={"ID":"ce57fffc-4d75-495f-b7ed-28676054f90e","Type":"ContainerDied","Data":"2a0dfecd12eefed7e04fa4bd8706afbc2a21a95326fc9fb0c721694048febe14"} Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.015246 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l6d9n" event={"ID":"ce57fffc-4d75-495f-b7ed-28676054f90e","Type":"ContainerStarted","Data":"4ee47379cf84f1e497897eaa6502d9f20f12d223c36ef2bb6fab28916234cc24"} Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.022765 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4d24k"] Feb 02 10:56:41 crc kubenswrapper[4782]: E0202 10:56:41.023272 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerName="dnsmasq-dns" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.023337 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerName="dnsmasq-dns" Feb 02 10:56:41 crc kubenswrapper[4782]: E0202 10:56:41.023406 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerName="init" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.023478 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerName="init" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.023717 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerName="dnsmasq-dns" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.024924 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.032126 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4d24k"] Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.034148 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.164443 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw7cz\" (UniqueName: \"kubernetes.io/projected/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-kube-api-access-qw7cz\") pod \"root-account-create-update-4d24k\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.164545 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-operator-scripts\") pod \"root-account-create-update-4d24k\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.266167 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw7cz\" (UniqueName: \"kubernetes.io/projected/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-kube-api-access-qw7cz\") pod \"root-account-create-update-4d24k\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.266242 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-operator-scripts\") pod \"root-account-create-update-4d24k\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.267015 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-operator-scripts\") pod \"root-account-create-update-4d24k\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.286833 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw7cz\" (UniqueName: \"kubernetes.io/projected/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-kube-api-access-qw7cz\") pod \"root-account-create-update-4d24k\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.359431 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.818384 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4d24k"] Feb 02 10:56:41 crc kubenswrapper[4782]: W0202 10:56:41.823020 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9ee52cc_7cc9_46d3_aed7_67cdc48551c7.slice/crio-5ce4a8ae5f9d9582b5fa591cecf849d094c378872af79d893e6b03c4fdd01e43 WatchSource:0}: Error finding container 5ce4a8ae5f9d9582b5fa591cecf849d094c378872af79d893e6b03c4fdd01e43: Status 404 returned error can't find the container with id 5ce4a8ae5f9d9582b5fa591cecf849d094c378872af79d893e6b03c4fdd01e43 Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.024626 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4d24k" event={"ID":"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7","Type":"ContainerStarted","Data":"ce6913cbdfadb84393c08d16197643efcccd14ec7c86e1016dba2acac54b37e6"} Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.025237 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4d24k" event={"ID":"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7","Type":"ContainerStarted","Data":"5ce4a8ae5f9d9582b5fa591cecf849d094c378872af79d893e6b03c4fdd01e43"} Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.051151 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-4d24k" podStartSLOduration=2.051132633 podStartE2EDuration="2.051132633s" podCreationTimestamp="2026-02-02 10:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:56:42.044510084 +0000 UTC m=+1081.928702800" watchObservedRunningTime="2026-02-02 10:56:42.051132633 +0000 UTC m=+1081.935325349" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.414403 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.426967 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.483731 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfde9ba3-fda5-496b-8ee5-52430e61f02a-operator-scripts\") pod \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.483886 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce57fffc-4d75-495f-b7ed-28676054f90e-operator-scripts\") pod \"ce57fffc-4d75-495f-b7ed-28676054f90e\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.483922 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvg7l\" (UniqueName: \"kubernetes.io/projected/ce57fffc-4d75-495f-b7ed-28676054f90e-kube-api-access-hvg7l\") pod \"ce57fffc-4d75-495f-b7ed-28676054f90e\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.483979 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqnpd\" (UniqueName: \"kubernetes.io/projected/bfde9ba3-fda5-496b-8ee5-52430e61f02a-kube-api-access-cqnpd\") pod \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.484891 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfde9ba3-fda5-496b-8ee5-52430e61f02a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bfde9ba3-fda5-496b-8ee5-52430e61f02a" (UID: "bfde9ba3-fda5-496b-8ee5-52430e61f02a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.484949 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce57fffc-4d75-495f-b7ed-28676054f90e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce57fffc-4d75-495f-b7ed-28676054f90e" (UID: "ce57fffc-4d75-495f-b7ed-28676054f90e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.489761 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfde9ba3-fda5-496b-8ee5-52430e61f02a-kube-api-access-cqnpd" (OuterVolumeSpecName: "kube-api-access-cqnpd") pod "bfde9ba3-fda5-496b-8ee5-52430e61f02a" (UID: "bfde9ba3-fda5-496b-8ee5-52430e61f02a"). InnerVolumeSpecName "kube-api-access-cqnpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.506447 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce57fffc-4d75-495f-b7ed-28676054f90e-kube-api-access-hvg7l" (OuterVolumeSpecName: "kube-api-access-hvg7l") pod "ce57fffc-4d75-495f-b7ed-28676054f90e" (UID: "ce57fffc-4d75-495f-b7ed-28676054f90e"). InnerVolumeSpecName "kube-api-access-hvg7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.585989 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqnpd\" (UniqueName: \"kubernetes.io/projected/bfde9ba3-fda5-496b-8ee5-52430e61f02a-kube-api-access-cqnpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.586024 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfde9ba3-fda5-496b-8ee5-52430e61f02a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.586036 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce57fffc-4d75-495f-b7ed-28676054f90e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.586044 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvg7l\" (UniqueName: \"kubernetes.io/projected/ce57fffc-4d75-495f-b7ed-28676054f90e-kube-api-access-hvg7l\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.031236 4782 generic.go:334] "Generic (PLEG): container finished" podID="e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" containerID="ce6913cbdfadb84393c08d16197643efcccd14ec7c86e1016dba2acac54b37e6" exitCode=0 Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.031308 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4d24k" event={"ID":"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7","Type":"ContainerDied","Data":"ce6913cbdfadb84393c08d16197643efcccd14ec7c86e1016dba2acac54b37e6"} Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.033430 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-377c-account-create-update-4zm4s" event={"ID":"bfde9ba3-fda5-496b-8ee5-52430e61f02a","Type":"ContainerDied","Data":"6fa69f75d3ab61cbf3d27efd1496c902da7a79fa8b733a929d055ffcecfa49f8"} Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.033455 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fa69f75d3ab61cbf3d27efd1496c902da7a79fa8b733a929d055ffcecfa49f8" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.033489 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.035631 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l6d9n" event={"ID":"ce57fffc-4d75-495f-b7ed-28676054f90e","Type":"ContainerDied","Data":"4ee47379cf84f1e497897eaa6502d9f20f12d223c36ef2bb6fab28916234cc24"} Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.035679 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.035685 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ee47379cf84f1e497897eaa6502d9f20f12d223c36ef2bb6fab28916234cc24" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.670547 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-77ps5"] Feb 02 10:56:43 crc kubenswrapper[4782]: E0202 10:56:43.671438 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce57fffc-4d75-495f-b7ed-28676054f90e" containerName="mariadb-database-create" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.671546 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce57fffc-4d75-495f-b7ed-28676054f90e" containerName="mariadb-database-create" Feb 02 10:56:43 crc kubenswrapper[4782]: E0202 10:56:43.671614 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfde9ba3-fda5-496b-8ee5-52430e61f02a" containerName="mariadb-account-create-update" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.671689 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfde9ba3-fda5-496b-8ee5-52430e61f02a" containerName="mariadb-account-create-update" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.671900 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce57fffc-4d75-495f-b7ed-28676054f90e" containerName="mariadb-database-create" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.671973 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfde9ba3-fda5-496b-8ee5-52430e61f02a" containerName="mariadb-account-create-update" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.672502 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.684578 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-77ps5"] Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.810175 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d561a4a7-bb99-43c6-859e-e3269a35a073-operator-scripts\") pod \"keystone-db-create-77ps5\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.810540 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgrt\" (UniqueName: \"kubernetes.io/projected/d561a4a7-bb99-43c6-859e-e3269a35a073-kube-api-access-mvgrt\") pod \"keystone-db-create-77ps5\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.823948 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0259-account-create-update-n5p89"] Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.824993 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.826838 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.833271 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0259-account-create-update-n5p89"] Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.913070 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d561a4a7-bb99-43c6-859e-e3269a35a073-operator-scripts\") pod \"keystone-db-create-77ps5\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.915122 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d561a4a7-bb99-43c6-859e-e3269a35a073-operator-scripts\") pod \"keystone-db-create-77ps5\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.915333 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng8tj\" (UniqueName: \"kubernetes.io/projected/80dad8de-560e-4ff5-b196-aa0bbbc2be15-kube-api-access-ng8tj\") pod \"keystone-0259-account-create-update-n5p89\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.915428 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvgrt\" (UniqueName: \"kubernetes.io/projected/d561a4a7-bb99-43c6-859e-e3269a35a073-kube-api-access-mvgrt\") pod \"keystone-db-create-77ps5\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.915504 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80dad8de-560e-4ff5-b196-aa0bbbc2be15-operator-scripts\") pod \"keystone-0259-account-create-update-n5p89\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.944231 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvgrt\" (UniqueName: \"kubernetes.io/projected/d561a4a7-bb99-43c6-859e-e3269a35a073-kube-api-access-mvgrt\") pod \"keystone-db-create-77ps5\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.987769 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.017230 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80dad8de-560e-4ff5-b196-aa0bbbc2be15-operator-scripts\") pod \"keystone-0259-account-create-update-n5p89\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.017370 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng8tj\" (UniqueName: \"kubernetes.io/projected/80dad8de-560e-4ff5-b196-aa0bbbc2be15-kube-api-access-ng8tj\") pod \"keystone-0259-account-create-update-n5p89\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.018339 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80dad8de-560e-4ff5-b196-aa0bbbc2be15-operator-scripts\") pod \"keystone-0259-account-create-update-n5p89\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.042091 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng8tj\" (UniqueName: \"kubernetes.io/projected/80dad8de-560e-4ff5-b196-aa0bbbc2be15-kube-api-access-ng8tj\") pod \"keystone-0259-account-create-update-n5p89\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.130705 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-6cg8m"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.131927 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.137453 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6cg8m"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.144377 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.220731 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj25r\" (UniqueName: \"kubernetes.io/projected/1db12436-a377-40c9-bc4e-9fe301b0b4cb-kube-api-access-vj25r\") pod \"placement-db-create-6cg8m\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.220850 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db12436-a377-40c9-bc4e-9fe301b0b4cb-operator-scripts\") pod \"placement-db-create-6cg8m\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.260066 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2124-account-create-update-npd9h"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.262149 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.269945 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.274868 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2124-account-create-update-npd9h"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.329386 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b358cda4-3c47-4270-ada7-f7653d5da96f-operator-scripts\") pod \"placement-2124-account-create-update-npd9h\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.329446 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db12436-a377-40c9-bc4e-9fe301b0b4cb-operator-scripts\") pod \"placement-db-create-6cg8m\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.329523 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj25r\" (UniqueName: \"kubernetes.io/projected/1db12436-a377-40c9-bc4e-9fe301b0b4cb-kube-api-access-vj25r\") pod \"placement-db-create-6cg8m\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.329660 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knc62\" (UniqueName: \"kubernetes.io/projected/b358cda4-3c47-4270-ada7-f7653d5da96f-kube-api-access-knc62\") pod \"placement-2124-account-create-update-npd9h\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.331381 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db12436-a377-40c9-bc4e-9fe301b0b4cb-operator-scripts\") pod \"placement-db-create-6cg8m\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.357211 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj25r\" (UniqueName: \"kubernetes.io/projected/1db12436-a377-40c9-bc4e-9fe301b0b4cb-kube-api-access-vj25r\") pod \"placement-db-create-6cg8m\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.436225 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knc62\" (UniqueName: \"kubernetes.io/projected/b358cda4-3c47-4270-ada7-f7653d5da96f-kube-api-access-knc62\") pod \"placement-2124-account-create-update-npd9h\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.436284 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b358cda4-3c47-4270-ada7-f7653d5da96f-operator-scripts\") pod \"placement-2124-account-create-update-npd9h\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.437383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b358cda4-3c47-4270-ada7-f7653d5da96f-operator-scripts\") pod \"placement-2124-account-create-update-npd9h\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.453529 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.463842 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knc62\" (UniqueName: \"kubernetes.io/projected/b358cda4-3c47-4270-ada7-f7653d5da96f-kube-api-access-knc62\") pod \"placement-2124-account-create-update-npd9h\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.480206 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.504550 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-77ps5"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.541762 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw7cz\" (UniqueName: \"kubernetes.io/projected/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-kube-api-access-qw7cz\") pod \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.542032 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-operator-scripts\") pod \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.543106 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" (UID: "e9ee52cc-7cc9-46d3-aed7-67cdc48551c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.545254 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-kube-api-access-qw7cz" (OuterVolumeSpecName: "kube-api-access-qw7cz") pod "e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" (UID: "e9ee52cc-7cc9-46d3-aed7-67cdc48551c7"). InnerVolumeSpecName "kube-api-access-qw7cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.591761 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.620985 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bwx58"] Feb 02 10:56:44 crc kubenswrapper[4782]: E0202 10:56:44.621743 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" containerName="mariadb-account-create-update" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.621767 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" containerName="mariadb-account-create-update" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.622294 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" containerName="mariadb-account-create-update" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.622800 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.629254 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.629451 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-57vkh" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.645491 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.645523 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw7cz\" (UniqueName: \"kubernetes.io/projected/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-kube-api-access-qw7cz\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.647101 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bwx58"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.746648 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-db-sync-config-data\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.746984 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-combined-ca-bundle\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.747006 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-config-data\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.747037 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xmdb\" (UniqueName: 
\"kubernetes.io/projected/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-kube-api-access-7xmdb\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.764384 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0259-account-create-update-n5p89"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.848186 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-combined-ca-bundle\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.848233 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-config-data\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.848263 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xmdb\" (UniqueName: \"kubernetes.io/projected/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-kube-api-access-7xmdb\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.848600 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-db-sync-config-data\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.855094 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-db-sync-config-data\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.864632 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-combined-ca-bundle\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.873090 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-config-data\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.879321 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xmdb\" (UniqueName: \"kubernetes.io/projected/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-kube-api-access-7xmdb\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.985112 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.065264 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0259-account-create-update-n5p89" event={"ID":"80dad8de-560e-4ff5-b196-aa0bbbc2be15","Type":"ContainerStarted","Data":"a7e9a4e8ac03aa75d7d2867e0b6e6e12cc8a9019e7c6d838c9869a17f5c4688b"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.067749 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0259-account-create-update-n5p89" event={"ID":"80dad8de-560e-4ff5-b196-aa0bbbc2be15","Type":"ContainerStarted","Data":"249c5a98a778b98fa438ebd5fb9b61464cc131eb48f90e0afab4c1117206b06b"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.071490 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6cg8m"] Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.099843 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-77ps5" event={"ID":"d561a4a7-bb99-43c6-859e-e3269a35a073","Type":"ContainerStarted","Data":"6d8d47213c18788507ca77e5f6162eb6c017b157cfec70f1dfb0ba7075187097"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.100067 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-77ps5" event={"ID":"d561a4a7-bb99-43c6-859e-e3269a35a073","Type":"ContainerStarted","Data":"affd51ba873c2ac717264a2c48b401c76e91abb5ac07f259af5a91e5d4c528f2"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.107254 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4d24k" event={"ID":"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7","Type":"ContainerDied","Data":"5ce4a8ae5f9d9582b5fa591cecf849d094c378872af79d893e6b03c4fdd01e43"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.107435 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ce4a8ae5f9d9582b5fa591cecf849d094c378872af79d893e6b03c4fdd01e43" Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.107570 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.109465 4782 generic.go:334] "Generic (PLEG): container finished" podID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerID="4b22530b4335201f0edeaaeb102aa0e0c1fe781965be9a91cd8a38308cd04cdb" exitCode=0 Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.109528 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9","Type":"ContainerDied","Data":"4b22530b4335201f0edeaaeb102aa0e0c1fe781965be9a91cd8a38308cd04cdb"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.111098 4782 generic.go:334] "Generic (PLEG): container finished" podID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerID="391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb" exitCode=0 Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.111124 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02fc338c-2f8c-4e17-8d5f-7a919f4237a2","Type":"ContainerDied","Data":"391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.116067 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-0259-account-create-update-n5p89" podStartSLOduration=2.11605171 podStartE2EDuration="2.11605171s" podCreationTimestamp="2026-02-02 10:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:56:45.095885673 +0000 UTC m=+1084.980078389" watchObservedRunningTime="2026-02-02 10:56:45.11605171 +0000 UTC m=+1085.000244426" Feb 02 10:56:45 crc kubenswrapper[4782]: W0202 10:56:45.265930 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb358cda4_3c47_4270_ada7_f7653d5da96f.slice/crio-1cd4bff856b66d107afe59a3f692ce638e8c700bf98010f119ac0ac84464d6ed WatchSource:0}: Error finding container 1cd4bff856b66d107afe59a3f692ce638e8c700bf98010f119ac0ac84464d6ed: Status 404 returned error can't find the container with id 1cd4bff856b66d107afe59a3f692ce638e8c700bf98010f119ac0ac84464d6ed Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.326794 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2124-account-create-update-npd9h"] Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.755129 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bwx58"] Feb 02 10:56:45 crc kubenswrapper[4782]: W0202 10:56:45.766291 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f885e8a_3dc8_4c07_ae3c_4c8ab072abc0.slice/crio-4a3dc46d0663b3ed61812db769ddcfdd9d2a4e53a6bf12f869fa34d7038f5e54 WatchSource:0}: Error finding container 4a3dc46d0663b3ed61812db769ddcfdd9d2a4e53a6bf12f869fa34d7038f5e54: Status 404 returned error can't find the container with id 4a3dc46d0663b3ed61812db769ddcfdd9d2a4e53a6bf12f869fa34d7038f5e54 Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.118116 4782 generic.go:334] "Generic (PLEG): container finished" podID="1db12436-a377-40c9-bc4e-9fe301b0b4cb" containerID="91ff00aa29fb6af4c20c4ab6c7010da35390db314e4a9d0dc6101bd74c8cfe7c" exitCode=0 Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.118186 4782 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/placement-db-create-6cg8m" event={"ID":"1db12436-a377-40c9-bc4e-9fe301b0b4cb","Type":"ContainerDied","Data":"91ff00aa29fb6af4c20c4ab6c7010da35390db314e4a9d0dc6101bd74c8cfe7c"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.118212 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6cg8m" event={"ID":"1db12436-a377-40c9-bc4e-9fe301b0b4cb","Type":"ContainerStarted","Data":"df41be568da50845f36d15cb17ab4937618a8a95d19a89ea0f84aa0e72e17800"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.120361 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02fc338c-2f8c-4e17-8d5f-7a919f4237a2","Type":"ContainerStarted","Data":"1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.120667 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.122905 4782 generic.go:334] "Generic (PLEG): container finished" podID="80dad8de-560e-4ff5-b196-aa0bbbc2be15" containerID="a7e9a4e8ac03aa75d7d2867e0b6e6e12cc8a9019e7c6d838c9869a17f5c4688b" exitCode=0 Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.122954 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0259-account-create-update-n5p89" event={"ID":"80dad8de-560e-4ff5-b196-aa0bbbc2be15","Type":"ContainerDied","Data":"a7e9a4e8ac03aa75d7d2867e0b6e6e12cc8a9019e7c6d838c9869a17f5c4688b"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.124344 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bwx58" event={"ID":"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0","Type":"ContainerStarted","Data":"4a3dc46d0663b3ed61812db769ddcfdd9d2a4e53a6bf12f869fa34d7038f5e54"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.125858 4782 generic.go:334] "Generic (PLEG): container finished" podID="d561a4a7-bb99-43c6-859e-e3269a35a073" containerID="6d8d47213c18788507ca77e5f6162eb6c017b157cfec70f1dfb0ba7075187097" exitCode=0 Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.125911 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-77ps5" event={"ID":"d561a4a7-bb99-43c6-859e-e3269a35a073","Type":"ContainerDied","Data":"6d8d47213c18788507ca77e5f6162eb6c017b157cfec70f1dfb0ba7075187097"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.127962 4782 generic.go:334] "Generic (PLEG): container finished" podID="b358cda4-3c47-4270-ada7-f7653d5da96f" containerID="d918711ae10925784d0ab83a02dc8d40b553f98643dc5469d54bc38912d8020e" exitCode=0 Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.128002 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2124-account-create-update-npd9h" event={"ID":"b358cda4-3c47-4270-ada7-f7653d5da96f","Type":"ContainerDied","Data":"d918711ae10925784d0ab83a02dc8d40b553f98643dc5469d54bc38912d8020e"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.128018 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2124-account-create-update-npd9h" event={"ID":"b358cda4-3c47-4270-ada7-f7653d5da96f","Type":"ContainerStarted","Data":"1cd4bff856b66d107afe59a3f692ce638e8c700bf98010f119ac0ac84464d6ed"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.130337 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9","Type":"ContainerStarted","Data":"8604d1c3c048376b925140bf95b36fa52a1c0e9474ad6d8f17f938507dee28c7"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.130581 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.171711 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.518983218 podStartE2EDuration="57.171697269s" podCreationTimestamp="2026-02-02 10:55:49 +0000 UTC" firstStartedPulling="2026-02-02 10:55:52.262822875 +0000 UTC m=+1032.147015591" lastFinishedPulling="2026-02-02 10:56:10.915536926 +0000 UTC m=+1050.799729642" observedRunningTime="2026-02-02 10:56:46.166688736 +0000 UTC m=+1086.050881452" watchObservedRunningTime="2026-02-02 10:56:46.171697269 +0000 UTC m=+1086.055889985" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.277147 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.075167277 podStartE2EDuration="57.277124336s" podCreationTimestamp="2026-02-02 10:55:49 +0000 UTC" firstStartedPulling="2026-02-02 10:55:51.707113722 +0000 UTC m=+1031.591306438" lastFinishedPulling="2026-02-02 10:56:10.909070781 +0000 UTC m=+1050.793263497" observedRunningTime="2026-02-02 10:56:46.26678492 +0000 UTC m=+1086.150977636" watchObservedRunningTime="2026-02-02 10:56:46.277124336 +0000 UTC m=+1086.161317052" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.528869 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.687126 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvgrt\" (UniqueName: \"kubernetes.io/projected/d561a4a7-bb99-43c6-859e-e3269a35a073-kube-api-access-mvgrt\") pod \"d561a4a7-bb99-43c6-859e-e3269a35a073\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.687273 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d561a4a7-bb99-43c6-859e-e3269a35a073-operator-scripts\") pod \"d561a4a7-bb99-43c6-859e-e3269a35a073\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.688191 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d561a4a7-bb99-43c6-859e-e3269a35a073-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d561a4a7-bb99-43c6-859e-e3269a35a073" (UID: "d561a4a7-bb99-43c6-859e-e3269a35a073"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.696866 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d561a4a7-bb99-43c6-859e-e3269a35a073-kube-api-access-mvgrt" (OuterVolumeSpecName: "kube-api-access-mvgrt") pod "d561a4a7-bb99-43c6-859e-e3269a35a073" (UID: "d561a4a7-bb99-43c6-859e-e3269a35a073"). InnerVolumeSpecName "kube-api-access-mvgrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.790783 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvgrt\" (UniqueName: \"kubernetes.io/projected/d561a4a7-bb99-43c6-859e-e3269a35a073-kube-api-access-mvgrt\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.790829 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d561a4a7-bb99-43c6-859e-e3269a35a073-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:47 crc kubenswrapper[4782]: I0202 10:56:47.139080 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:47 crc kubenswrapper[4782]: I0202 10:56:47.139616 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-77ps5" event={"ID":"d561a4a7-bb99-43c6-859e-e3269a35a073","Type":"ContainerDied","Data":"affd51ba873c2ac717264a2c48b401c76e91abb5ac07f259af5a91e5d4c528f2"} Feb 02 10:56:47 crc kubenswrapper[4782]: I0202 10:56:47.139688 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="affd51ba873c2ac717264a2c48b401c76e91abb5ac07f259af5a91e5d4c528f2" Feb 02 10:56:47 crc kubenswrapper[4782]: I0202 10:56:47.472172 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4d24k"] Feb 02 10:56:47 crc kubenswrapper[4782]: I0202 10:56:47.480328 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4d24k"] Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.127014 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.137973 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.150597 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.160963 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0259-account-create-update-n5p89" event={"ID":"80dad8de-560e-4ff5-b196-aa0bbbc2be15","Type":"ContainerDied","Data":"249c5a98a778b98fa438ebd5fb9b61464cc131eb48f90e0afab4c1117206b06b"} Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.161001 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="249c5a98a778b98fa438ebd5fb9b61464cc131eb48f90e0afab4c1117206b06b" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.161021 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.162515 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2124-account-create-update-npd9h" event={"ID":"b358cda4-3c47-4270-ada7-f7653d5da96f","Type":"ContainerDied","Data":"1cd4bff856b66d107afe59a3f692ce638e8c700bf98010f119ac0ac84464d6ed"} Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.162547 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cd4bff856b66d107afe59a3f692ce638e8c700bf98010f119ac0ac84464d6ed" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.162563 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.163703 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6cg8m" event={"ID":"1db12436-a377-40c9-bc4e-9fe301b0b4cb","Type":"ContainerDied","Data":"df41be568da50845f36d15cb17ab4937618a8a95d19a89ea0f84aa0e72e17800"} Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.163719 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df41be568da50845f36d15cb17ab4937618a8a95d19a89ea0f84aa0e72e17800" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.163753 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.219205 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db12436-a377-40c9-bc4e-9fe301b0b4cb-operator-scripts\") pod \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.219271 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj25r\" (UniqueName: \"kubernetes.io/projected/1db12436-a377-40c9-bc4e-9fe301b0b4cb-kube-api-access-vj25r\") pod \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.221025 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db12436-a377-40c9-bc4e-9fe301b0b4cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1db12436-a377-40c9-bc4e-9fe301b0b4cb" (UID: "1db12436-a377-40c9-bc4e-9fe301b0b4cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.237924 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db12436-a377-40c9-bc4e-9fe301b0b4cb-kube-api-access-vj25r" (OuterVolumeSpecName: "kube-api-access-vj25r") pod "1db12436-a377-40c9-bc4e-9fe301b0b4cb" (UID: "1db12436-a377-40c9-bc4e-9fe301b0b4cb"). InnerVolumeSpecName "kube-api-access-vj25r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.320935 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b358cda4-3c47-4270-ada7-f7653d5da96f-operator-scripts\") pod \"b358cda4-3c47-4270-ada7-f7653d5da96f\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.321097 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80dad8de-560e-4ff5-b196-aa0bbbc2be15-operator-scripts\") pod \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.321180 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knc62\" (UniqueName: \"kubernetes.io/projected/b358cda4-3c47-4270-ada7-f7653d5da96f-kube-api-access-knc62\") pod \"b358cda4-3c47-4270-ada7-f7653d5da96f\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.321207 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng8tj\" (UniqueName: \"kubernetes.io/projected/80dad8de-560e-4ff5-b196-aa0bbbc2be15-kube-api-access-ng8tj\") pod \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.321664 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db12436-a377-40c9-bc4e-9fe301b0b4cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.321684 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj25r\" (UniqueName: \"kubernetes.io/projected/1db12436-a377-40c9-bc4e-9fe301b0b4cb-kube-api-access-vj25r\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.322566 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b358cda4-3c47-4270-ada7-f7653d5da96f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b358cda4-3c47-4270-ada7-f7653d5da96f" (UID: "b358cda4-3c47-4270-ada7-f7653d5da96f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.322847 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dad8de-560e-4ff5-b196-aa0bbbc2be15-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80dad8de-560e-4ff5-b196-aa0bbbc2be15" (UID: "80dad8de-560e-4ff5-b196-aa0bbbc2be15"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.324946 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80dad8de-560e-4ff5-b196-aa0bbbc2be15-kube-api-access-ng8tj" (OuterVolumeSpecName: "kube-api-access-ng8tj") pod "80dad8de-560e-4ff5-b196-aa0bbbc2be15" (UID: "80dad8de-560e-4ff5-b196-aa0bbbc2be15"). InnerVolumeSpecName "kube-api-access-ng8tj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.325623 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b358cda4-3c47-4270-ada7-f7653d5da96f-kube-api-access-knc62" (OuterVolumeSpecName: "kube-api-access-knc62") pod "b358cda4-3c47-4270-ada7-f7653d5da96f" (UID: "b358cda4-3c47-4270-ada7-f7653d5da96f"). InnerVolumeSpecName "kube-api-access-knc62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.423138 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knc62\" (UniqueName: \"kubernetes.io/projected/b358cda4-3c47-4270-ada7-f7653d5da96f-kube-api-access-knc62\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.423188 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng8tj\" (UniqueName: \"kubernetes.io/projected/80dad8de-560e-4ff5-b196-aa0bbbc2be15-kube-api-access-ng8tj\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.423201 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b358cda4-3c47-4270-ada7-f7653d5da96f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.423212 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80dad8de-560e-4ff5-b196-aa0bbbc2be15-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.857306 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" path="/var/lib/kubelet/pods/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7/volumes" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.478444 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6jdgj"] Feb 02 10:56:52 crc kubenswrapper[4782]: E0202 10:56:52.479167 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db12436-a377-40c9-bc4e-9fe301b0b4cb" containerName="mariadb-database-create" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479182 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db12436-a377-40c9-bc4e-9fe301b0b4cb" containerName="mariadb-database-create" Feb 02 10:56:52 crc kubenswrapper[4782]: E0202 10:56:52.479215 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b358cda4-3c47-4270-ada7-f7653d5da96f" containerName="mariadb-account-create-update" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479223 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b358cda4-3c47-4270-ada7-f7653d5da96f" containerName="mariadb-account-create-update" Feb 02 10:56:52 crc kubenswrapper[4782]: E0202 10:56:52.479242 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dad8de-560e-4ff5-b196-aa0bbbc2be15" containerName="mariadb-account-create-update" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479250 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dad8de-560e-4ff5-b196-aa0bbbc2be15" containerName="mariadb-account-create-update" Feb 02 10:56:52 crc kubenswrapper[4782]: E0202 10:56:52.479282 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d561a4a7-bb99-43c6-859e-e3269a35a073" containerName="mariadb-database-create" Feb 02 10:56:52 crc 
kubenswrapper[4782]: I0202 10:56:52.479289 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d561a4a7-bb99-43c6-859e-e3269a35a073" containerName="mariadb-database-create" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479447 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b358cda4-3c47-4270-ada7-f7653d5da96f" containerName="mariadb-account-create-update" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479464 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d561a4a7-bb99-43c6-859e-e3269a35a073" containerName="mariadb-database-create" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479478 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dad8de-560e-4ff5-b196-aa0bbbc2be15" containerName="mariadb-account-create-update" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479489 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db12436-a377-40c9-bc4e-9fe301b0b4cb" containerName="mariadb-database-create" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.480083 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.482313 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.505596 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6jdgj"] Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.591208 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhrq\" (UniqueName: \"kubernetes.io/projected/29024188-b374-45b7-ad85-b2d4ca88b485-kube-api-access-jrhrq\") pod \"root-account-create-update-6jdgj\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.591263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29024188-b374-45b7-ad85-b2d4ca88b485-operator-scripts\") pod \"root-account-create-update-6jdgj\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.693422 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhrq\" (UniqueName: \"kubernetes.io/projected/29024188-b374-45b7-ad85-b2d4ca88b485-kube-api-access-jrhrq\") pod \"root-account-create-update-6jdgj\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.693492 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29024188-b374-45b7-ad85-b2d4ca88b485-operator-scripts\") pod \"root-account-create-update-6jdgj\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.694375 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29024188-b374-45b7-ad85-b2d4ca88b485-operator-scripts\") pod \"root-account-create-update-6jdgj\" (UID: 
\"29024188-b374-45b7-ad85-b2d4ca88b485\") " pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.714965 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhrq\" (UniqueName: \"kubernetes.io/projected/29024188-b374-45b7-ad85-b2d4ca88b485-kube-api-access-jrhrq\") pod \"root-account-create-update-6jdgj\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.796747 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:53 crc kubenswrapper[4782]: I0202 10:56:53.555849 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 10:56:56 crc kubenswrapper[4782]: I0202 10:56:56.009346 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sv8l5" podUID="b009ca1c-fc93-4724-9275-c44039256469" containerName="ovn-controller" probeResult="failure" output=< Feb 02 10:56:56 crc kubenswrapper[4782]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 10:56:56 crc kubenswrapper[4782]: > Feb 02 10:57:00 crc kubenswrapper[4782]: I0202 10:57:00.703804 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.107824 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.253753 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.290911 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sv8l5" podUID="b009ca1c-fc93-4724-9275-c44039256469" containerName="ovn-controller" probeResult="failure" output=< Feb 02 10:57:01 crc kubenswrapper[4782]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 10:57:01 crc kubenswrapper[4782]: > Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.295208 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.347779 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xzm82"] Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.348832 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-xzm82" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.371011 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xzm82"] Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.453375 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-operator-scripts\") pod \"cinder-db-create-xzm82\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " pod="openstack/cinder-db-create-xzm82" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.453439 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d45c8\" (UniqueName: \"kubernetes.io/projected/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-kube-api-access-d45c8\") pod \"cinder-db-create-xzm82\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " pod="openstack/cinder-db-create-xzm82" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.555003 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d45c8\" (UniqueName: \"kubernetes.io/projected/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-kube-api-access-d45c8\") pod \"cinder-db-create-xzm82\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " pod="openstack/cinder-db-create-xzm82" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.555261 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-operator-scripts\") pod \"cinder-db-create-xzm82\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " pod="openstack/cinder-db-create-xzm82" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.555568 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-q97pt"] Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.556450 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-operator-scripts\") pod \"cinder-db-create-xzm82\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " pod="openstack/cinder-db-create-xzm82" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.557473 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q97pt" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.577664 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-q97pt"] Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.589576 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sv8l5-config-h25fz"] Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.591681 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.622003 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.656769 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e5ac2b-72a8-46be-839a-fe639916a32e-operator-scripts\") pod \"barbican-db-create-q97pt\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " pod="openstack/barbican-db-create-q97pt" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.656990 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7t9\" (UniqueName: \"kubernetes.io/projected/68e5ac2b-72a8-46be-839a-fe639916a32e-kube-api-access-bc7t9\") pod \"barbican-db-create-q97pt\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " pod="openstack/barbican-db-create-q97pt" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.664412 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sv8l5-config-h25fz"] Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.664585 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d45c8\" (UniqueName: \"kubernetes.io/projected/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-kube-api-access-d45c8\") pod \"cinder-db-create-xzm82\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " pod="openstack/cinder-db-create-xzm82" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.683368 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xzm82" Feb 02 10:57:01 crc kubenswrapper[4782]: E0202 10:57:01.717055 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 02 10:57:01 crc kubenswrapper[4782]: E0202 10:57:01.717204 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xmdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-bwx58_openstack(1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:57:01 crc kubenswrapper[4782]: E0202 10:57:01.720714 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-bwx58" podUID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.757970 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-log-ovn\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758020 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc7t9\" (UniqueName: \"kubernetes.io/projected/68e5ac2b-72a8-46be-839a-fe639916a32e-kube-api-access-bc7t9\") pod \"barbican-db-create-q97pt\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " pod="openstack/barbican-db-create-q97pt" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758054 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-scripts\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: 
\"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758102 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758140 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e5ac2b-72a8-46be-839a-fe639916a32e-operator-scripts\") pod \"barbican-db-create-q97pt\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " pod="openstack/barbican-db-create-q97pt" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758206 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-additional-scripts\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758226 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run-ovn\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758257 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxbg5\" (UniqueName: \"kubernetes.io/projected/e9cc96ce-182b-4231-a5e9-10197e083077-kube-api-access-nxbg5\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.759308 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e5ac2b-72a8-46be-839a-fe639916a32e-operator-scripts\") pod \"barbican-db-create-q97pt\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " pod="openstack/barbican-db-create-q97pt" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.800969 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7dbcc"] Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.801985 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7dbcc" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.836610 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8017-account-create-update-t6d9m"] Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.838111 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8017-account-create-update-t6d9m" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859151 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7dbcc"] Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859409 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859447 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859557 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-additional-scripts\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859574 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run-ovn\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859608 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxbg5\" (UniqueName: \"kubernetes.io/projected/e9cc96ce-182b-4231-a5e9-10197e083077-kube-api-access-nxbg5\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859631 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-log-ovn\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859682 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-scripts\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859981 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run-ovn\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.860033 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 
10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.860693 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-additional-scripts\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.861222 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-log-ovn\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.862148 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-scripts\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.865275 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc7t9\" (UniqueName: \"kubernetes.io/projected/68e5ac2b-72a8-46be-839a-fe639916a32e-kube-api-access-bc7t9\") pod \"barbican-db-create-q97pt\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " pod="openstack/barbican-db-create-q97pt" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.871980 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q97pt" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.960691 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtmll\" (UniqueName: \"kubernetes.io/projected/53ddb047-8931-415b-8d0f-d0f73b72c8b3-kube-api-access-rtmll\") pod \"barbican-8017-account-create-update-t6d9m\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " pod="openstack/barbican-8017-account-create-update-t6d9m" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.960789 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ddb047-8931-415b-8d0f-d0f73b72c8b3-operator-scripts\") pod \"barbican-8017-account-create-update-t6d9m\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " pod="openstack/barbican-8017-account-create-update-t6d9m" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.960872 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8w9w\" (UniqueName: \"kubernetes.io/projected/821635c8-3cf1-408b-8949-81dbc48b07b6-kube-api-access-f8w9w\") pod \"neutron-db-create-7dbcc\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " pod="openstack/neutron-db-create-7dbcc" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.960900 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/821635c8-3cf1-408b-8949-81dbc48b07b6-operator-scripts\") pod \"neutron-db-create-7dbcc\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " pod="openstack/neutron-db-create-7dbcc" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.964926 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-nxbg5\" (UniqueName: \"kubernetes.io/projected/e9cc96ce-182b-4231-a5e9-10197e083077-kube-api-access-nxbg5\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.965932 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8017-account-create-update-t6d9m"] Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.046612 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.062576 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtmll\" (UniqueName: \"kubernetes.io/projected/53ddb047-8931-415b-8d0f-d0f73b72c8b3-kube-api-access-rtmll\") pod \"barbican-8017-account-create-update-t6d9m\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " pod="openstack/barbican-8017-account-create-update-t6d9m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.062695 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ddb047-8931-415b-8d0f-d0f73b72c8b3-operator-scripts\") pod \"barbican-8017-account-create-update-t6d9m\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " pod="openstack/barbican-8017-account-create-update-t6d9m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.062756 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8w9w\" (UniqueName: \"kubernetes.io/projected/821635c8-3cf1-408b-8949-81dbc48b07b6-kube-api-access-f8w9w\") pod \"neutron-db-create-7dbcc\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " pod="openstack/neutron-db-create-7dbcc" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.062786 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/821635c8-3cf1-408b-8949-81dbc48b07b6-operator-scripts\") pod \"neutron-db-create-7dbcc\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " pod="openstack/neutron-db-create-7dbcc" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.063819 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ddb047-8931-415b-8d0f-d0f73b72c8b3-operator-scripts\") pod \"barbican-8017-account-create-update-t6d9m\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " pod="openstack/barbican-8017-account-create-update-t6d9m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.063822 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/821635c8-3cf1-408b-8949-81dbc48b07b6-operator-scripts\") pod \"neutron-db-create-7dbcc\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " pod="openstack/neutron-db-create-7dbcc" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.112204 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8w9w\" (UniqueName: \"kubernetes.io/projected/821635c8-3cf1-408b-8949-81dbc48b07b6-kube-api-access-f8w9w\") pod \"neutron-db-create-7dbcc\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " pod="openstack/neutron-db-create-7dbcc" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.115866 4782 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-db-sync-v4g2v"] Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.116807 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.121109 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.121331 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.121518 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9pmlq" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.121773 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.139223 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtmll\" (UniqueName: \"kubernetes.io/projected/53ddb047-8931-415b-8d0f-d0f73b72c8b3-kube-api-access-rtmll\") pod \"barbican-8017-account-create-update-t6d9m\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " pod="openstack/barbican-8017-account-create-update-t6d9m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.156211 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-v4g2v"] Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.161979 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7dbcc" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.235619 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8017-account-create-update-t6d9m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.242008 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0e36-account-create-update-f5556"] Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.243200 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0e36-account-create-update-f5556" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.249097 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.252709 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0e36-account-create-update-f5556"] Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.266366 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.266434 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvls2\" (UniqueName: \"kubernetes.io/projected/843d8da2-ab8c-4938-be4b-aa67af531e1e-kube-api-access-pvls2\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.266486 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-config-data\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:02 crc kubenswrapper[4782]: E0202 10:57:02.341089 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-bwx58" podUID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.367700 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvls2\" (UniqueName: \"kubernetes.io/projected/843d8da2-ab8c-4938-be4b-aa67af531e1e-kube-api-access-pvls2\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.367778 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-config-data\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.367834 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c77267-9133-440d-9f4e-536b2a021fdc-operator-scripts\") pod \"cinder-0e36-account-create-update-f5556\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " pod="openstack/cinder-0e36-account-create-update-f5556" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.367862 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " 
pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.367907 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfpw2\" (UniqueName: \"kubernetes.io/projected/c3c77267-9133-440d-9f4e-536b2a021fdc-kube-api-access-gfpw2\") pod \"cinder-0e36-account-create-update-f5556\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " pod="openstack/cinder-0e36-account-create-update-f5556" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.380720 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-config-data\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.385364 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.412421 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvls2\" (UniqueName: \"kubernetes.io/projected/843d8da2-ab8c-4938-be4b-aa67af531e1e-kube-api-access-pvls2\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.441543 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.470871 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfpw2\" (UniqueName: \"kubernetes.io/projected/c3c77267-9133-440d-9f4e-536b2a021fdc-kube-api-access-gfpw2\") pod \"cinder-0e36-account-create-update-f5556\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " pod="openstack/cinder-0e36-account-create-update-f5556" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.471320 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c77267-9133-440d-9f4e-536b2a021fdc-operator-scripts\") pod \"cinder-0e36-account-create-update-f5556\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " pod="openstack/cinder-0e36-account-create-update-f5556" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.473382 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c77267-9133-440d-9f4e-536b2a021fdc-operator-scripts\") pod \"cinder-0e36-account-create-update-f5556\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " pod="openstack/cinder-0e36-account-create-update-f5556" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.511869 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-63a1-account-create-update-4kn5m"] Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.513873 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.518751 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.520778 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfpw2\" (UniqueName: \"kubernetes.io/projected/c3c77267-9133-440d-9f4e-536b2a021fdc-kube-api-access-gfpw2\") pod \"cinder-0e36-account-create-update-f5556\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " pod="openstack/cinder-0e36-account-create-update-f5556" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.565718 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-63a1-account-create-update-4kn5m"] Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.595133 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0e36-account-create-update-f5556" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.676107 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqd6d\" (UniqueName: \"kubernetes.io/projected/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-kube-api-access-mqd6d\") pod \"neutron-63a1-account-create-update-4kn5m\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.676208 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-operator-scripts\") pod \"neutron-63a1-account-create-update-4kn5m\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.782360 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqd6d\" (UniqueName: \"kubernetes.io/projected/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-kube-api-access-mqd6d\") pod \"neutron-63a1-account-create-update-4kn5m\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.782966 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-operator-scripts\") pod \"neutron-63a1-account-create-update-4kn5m\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.783853 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-operator-scripts\") pod \"neutron-63a1-account-create-update-4kn5m\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.858845 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqd6d\" (UniqueName: \"kubernetes.io/projected/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-kube-api-access-mqd6d\") pod \"neutron-63a1-account-create-update-4kn5m\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " 
pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.860004 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.974958 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6jdgj"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.320946 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-q97pt"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.332346 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sv8l5-config-h25fz"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.348218 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q97pt" event={"ID":"68e5ac2b-72a8-46be-839a-fe639916a32e","Type":"ContainerStarted","Data":"0a50f00bb9f097f6fa3bdc79dd91c54109f518e4daed13ca669b1c0b2aa64a56"} Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.350999 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6jdgj" event={"ID":"29024188-b374-45b7-ad85-b2d4ca88b485","Type":"ContainerStarted","Data":"59931c74c238f44b10e52c4d20d13f519cae8b3cf2b301df562ef56ffaee122d"} Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.362910 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xzm82"] Feb 02 10:57:03 crc kubenswrapper[4782]: W0202 10:57:03.370811 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9cc96ce_182b_4231_a5e9_10197e083077.slice/crio-9391e44047f757697bd5504265794e7bacaa93018b4f7c703f10a49c0c3e6622 WatchSource:0}: Error finding container 9391e44047f757697bd5504265794e7bacaa93018b4f7c703f10a49c0c3e6622: Status 404 returned error can't find the container with id 9391e44047f757697bd5504265794e7bacaa93018b4f7c703f10a49c0c3e6622 Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.603486 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8017-account-create-update-t6d9m"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.644284 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7dbcc"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.700154 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-v4g2v"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.723706 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0e36-account-create-update-f5556"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.779310 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-63a1-account-create-update-4kn5m"] Feb 02 10:57:03 crc kubenswrapper[4782]: W0202 10:57:03.784979 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod843d8da2_ab8c_4938_be4b_aa67af531e1e.slice/crio-e20283ae35f20eb57ed6ab460abe821456fab6128c714eba5f3a94fb7961e8ce WatchSource:0}: Error finding container e20283ae35f20eb57ed6ab460abe821456fab6128c714eba5f3a94fb7961e8ce: Status 404 returned error can't find the container with id e20283ae35f20eb57ed6ab460abe821456fab6128c714eba5f3a94fb7961e8ce Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.363113 4782 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-8017-account-create-update-t6d9m" event={"ID":"53ddb047-8931-415b-8d0f-d0f73b72c8b3","Type":"ContainerStarted","Data":"642f3a52732b34bccf9c9fbc304bd2cfce8dc967c11a5c31acc742832089e402"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.363480 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8017-account-create-update-t6d9m" event={"ID":"53ddb047-8931-415b-8d0f-d0f73b72c8b3","Type":"ContainerStarted","Data":"cf9abdd02e5074feef030e5dd66f4f015eb1117f9a0e9921a5c0f13149d8cfb5"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.365093 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v4g2v" event={"ID":"843d8da2-ab8c-4938-be4b-aa67af531e1e","Type":"ContainerStarted","Data":"e20283ae35f20eb57ed6ab460abe821456fab6128c714eba5f3a94fb7961e8ce"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.367188 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0e36-account-create-update-f5556" event={"ID":"c3c77267-9133-440d-9f4e-536b2a021fdc","Type":"ContainerStarted","Data":"8e0d398b0286ba353cd173b897d449b2563fd8596bcbf1161ae3a708c88b87ef"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.367236 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0e36-account-create-update-f5556" event={"ID":"c3c77267-9133-440d-9f4e-536b2a021fdc","Type":"ContainerStarted","Data":"03814b857f1ea9d519135baa096d1211b12fbe8c1aa5ce0a643e569e44214fa5"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.371967 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-63a1-account-create-update-4kn5m" event={"ID":"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69","Type":"ContainerStarted","Data":"f0e0c9ec29176a7805dbb55ca0554bf08656a1bb5da6a9295d6c51196f8c9acf"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.372021 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-63a1-account-create-update-4kn5m" event={"ID":"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69","Type":"ContainerStarted","Data":"4922ee8804a4e7e6bcbb57aadb57f32946333f0f7b3f9634c0cd775bf2cae365"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.376089 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6jdgj" event={"ID":"29024188-b374-45b7-ad85-b2d4ca88b485","Type":"ContainerDied","Data":"9024b9d44f2a9c5bd7aaa4dc9abd2a12f77d4a6bdfae488ee552a49cc6449554"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.375915 4782 generic.go:334] "Generic (PLEG): container finished" podID="29024188-b374-45b7-ad85-b2d4ca88b485" containerID="9024b9d44f2a9c5bd7aaa4dc9abd2a12f77d4a6bdfae488ee552a49cc6449554" exitCode=0 Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.379248 4782 generic.go:334] "Generic (PLEG): container finished" podID="b78c9d8b-0793-4e57-8a3d-ba7303f12d37" containerID="86c67676caca480b43ace8b3b556dc1c7777a8a4b569eb0de34ba6545c1ccf6c" exitCode=0 Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.379343 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xzm82" event={"ID":"b78c9d8b-0793-4e57-8a3d-ba7303f12d37","Type":"ContainerDied","Data":"86c67676caca480b43ace8b3b556dc1c7777a8a4b569eb0de34ba6545c1ccf6c"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.379380 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xzm82" 
event={"ID":"b78c9d8b-0793-4e57-8a3d-ba7303f12d37","Type":"ContainerStarted","Data":"96a9faaddad4cf88e7bfd92da474c6e44adac17cad51fedc68fc17de29153d22"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.381266 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-h25fz" event={"ID":"e9cc96ce-182b-4231-a5e9-10197e083077","Type":"ContainerStarted","Data":"be4a6da8c7bc821537f4458c73cbb541469bc749b77b4d5f396a7e71bf22fd01"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.381305 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-h25fz" event={"ID":"e9cc96ce-182b-4231-a5e9-10197e083077","Type":"ContainerStarted","Data":"9391e44047f757697bd5504265794e7bacaa93018b4f7c703f10a49c0c3e6622"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.383409 4782 generic.go:334] "Generic (PLEG): container finished" podID="68e5ac2b-72a8-46be-839a-fe639916a32e" containerID="469ae18dd42598dba552dffdd5607faf35c16e63cd9d3d0f900d45cc0954f86f" exitCode=0 Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.383585 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q97pt" event={"ID":"68e5ac2b-72a8-46be-839a-fe639916a32e","Type":"ContainerDied","Data":"469ae18dd42598dba552dffdd5607faf35c16e63cd9d3d0f900d45cc0954f86f"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.385467 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7dbcc" event={"ID":"821635c8-3cf1-408b-8949-81dbc48b07b6","Type":"ContainerStarted","Data":"489df32e7e5f6a2407566ee0433e9eb8f24a84a3bc401deba1b69bf5b52b02e2"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.385504 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7dbcc" event={"ID":"821635c8-3cf1-408b-8949-81dbc48b07b6","Type":"ContainerStarted","Data":"70b6d641abeabe7c2253d823ad71d899e1791cab27661eb48d3cc66649421cca"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.414270 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-8017-account-create-update-t6d9m" podStartSLOduration=3.414248333 podStartE2EDuration="3.414248333s" podCreationTimestamp="2026-02-02 10:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:04.396894237 +0000 UTC m=+1104.281086953" watchObservedRunningTime="2026-02-02 10:57:04.414248333 +0000 UTC m=+1104.298441049" Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.416541 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-0e36-account-create-update-f5556" podStartSLOduration=2.416525418 podStartE2EDuration="2.416525418s" podCreationTimestamp="2026-02-02 10:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:04.415238122 +0000 UTC m=+1104.299430838" watchObservedRunningTime="2026-02-02 10:57:04.416525418 +0000 UTC m=+1104.300718134" Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.442870 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-63a1-account-create-update-4kn5m" podStartSLOduration=2.442845802 podStartE2EDuration="2.442845802s" podCreationTimestamp="2026-02-02 10:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-02 10:57:04.441251166 +0000 UTC m=+1104.325443882" watchObservedRunningTime="2026-02-02 10:57:04.442845802 +0000 UTC m=+1104.327038518" Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.575442 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sv8l5-config-h25fz" podStartSLOduration=3.575417845 podStartE2EDuration="3.575417845s" podCreationTimestamp="2026-02-02 10:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:04.54725831 +0000 UTC m=+1104.431451026" watchObservedRunningTime="2026-02-02 10:57:04.575417845 +0000 UTC m=+1104.459610561" Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.626917 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-7dbcc" podStartSLOduration=3.626883148 podStartE2EDuration="3.626883148s" podCreationTimestamp="2026-02-02 10:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:04.575017764 +0000 UTC m=+1104.459210480" watchObservedRunningTime="2026-02-02 10:57:04.626883148 +0000 UTC m=+1104.511075874" Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.400262 4782 generic.go:334] "Generic (PLEG): container finished" podID="53ddb047-8931-415b-8d0f-d0f73b72c8b3" containerID="642f3a52732b34bccf9c9fbc304bd2cfce8dc967c11a5c31acc742832089e402" exitCode=0 Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.400365 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8017-account-create-update-t6d9m" event={"ID":"53ddb047-8931-415b-8d0f-d0f73b72c8b3","Type":"ContainerDied","Data":"642f3a52732b34bccf9c9fbc304bd2cfce8dc967c11a5c31acc742832089e402"} Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.403240 4782 generic.go:334] "Generic (PLEG): container finished" podID="821635c8-3cf1-408b-8949-81dbc48b07b6" containerID="489df32e7e5f6a2407566ee0433e9eb8f24a84a3bc401deba1b69bf5b52b02e2" exitCode=0 Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.403333 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7dbcc" event={"ID":"821635c8-3cf1-408b-8949-81dbc48b07b6","Type":"ContainerDied","Data":"489df32e7e5f6a2407566ee0433e9eb8f24a84a3bc401deba1b69bf5b52b02e2"} Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.405442 4782 generic.go:334] "Generic (PLEG): container finished" podID="c3c77267-9133-440d-9f4e-536b2a021fdc" containerID="8e0d398b0286ba353cd173b897d449b2563fd8596bcbf1161ae3a708c88b87ef" exitCode=0 Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.405503 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0e36-account-create-update-f5556" event={"ID":"c3c77267-9133-440d-9f4e-536b2a021fdc","Type":"ContainerDied","Data":"8e0d398b0286ba353cd173b897d449b2563fd8596bcbf1161ae3a708c88b87ef"} Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.409100 4782 generic.go:334] "Generic (PLEG): container finished" podID="7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" containerID="f0e0c9ec29176a7805dbb55ca0554bf08656a1bb5da6a9295d6c51196f8c9acf" exitCode=0 Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.409174 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-63a1-account-create-update-4kn5m" 
event={"ID":"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69","Type":"ContainerDied","Data":"f0e0c9ec29176a7805dbb55ca0554bf08656a1bb5da6a9295d6c51196f8c9acf"} Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.411164 4782 generic.go:334] "Generic (PLEG): container finished" podID="e9cc96ce-182b-4231-a5e9-10197e083077" containerID="be4a6da8c7bc821537f4458c73cbb541469bc749b77b4d5f396a7e71bf22fd01" exitCode=0 Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.411346 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-h25fz" event={"ID":"e9cc96ce-182b-4231-a5e9-10197e083077","Type":"ContainerDied","Data":"be4a6da8c7bc821537f4458c73cbb541469bc749b77b4d5f396a7e71bf22fd01"} Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.260182 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q97pt" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.270821 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sv8l5" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.337554 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e5ac2b-72a8-46be-839a-fe639916a32e-operator-scripts\") pod \"68e5ac2b-72a8-46be-839a-fe639916a32e\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.337694 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc7t9\" (UniqueName: \"kubernetes.io/projected/68e5ac2b-72a8-46be-839a-fe639916a32e-kube-api-access-bc7t9\") pod \"68e5ac2b-72a8-46be-839a-fe639916a32e\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.342440 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68e5ac2b-72a8-46be-839a-fe639916a32e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68e5ac2b-72a8-46be-839a-fe639916a32e" (UID: "68e5ac2b-72a8-46be-839a-fe639916a32e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.368927 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e5ac2b-72a8-46be-839a-fe639916a32e-kube-api-access-bc7t9" (OuterVolumeSpecName: "kube-api-access-bc7t9") pod "68e5ac2b-72a8-46be-839a-fe639916a32e" (UID: "68e5ac2b-72a8-46be-839a-fe639916a32e"). InnerVolumeSpecName "kube-api-access-bc7t9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.440627 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e5ac2b-72a8-46be-839a-fe639916a32e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.440963 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc7t9\" (UniqueName: \"kubernetes.io/projected/68e5ac2b-72a8-46be-839a-fe639916a32e-kube-api-access-bc7t9\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.456764 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xzm82" event={"ID":"b78c9d8b-0793-4e57-8a3d-ba7303f12d37","Type":"ContainerDied","Data":"96a9faaddad4cf88e7bfd92da474c6e44adac17cad51fedc68fc17de29153d22"} Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.456800 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96a9faaddad4cf88e7bfd92da474c6e44adac17cad51fedc68fc17de29153d22" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.469100 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q97pt" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.471228 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q97pt" event={"ID":"68e5ac2b-72a8-46be-839a-fe639916a32e","Type":"ContainerDied","Data":"0a50f00bb9f097f6fa3bdc79dd91c54109f518e4daed13ca669b1c0b2aa64a56"} Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.471272 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a50f00bb9f097f6fa3bdc79dd91c54109f518e4daed13ca669b1c0b2aa64a56" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.480472 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xzm82" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.657379 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-operator-scripts\") pod \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.657437 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d45c8\" (UniqueName: \"kubernetes.io/projected/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-kube-api-access-d45c8\") pod \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.658318 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b78c9d8b-0793-4e57-8a3d-ba7303f12d37" (UID: "b78c9d8b-0793-4e57-8a3d-ba7303f12d37"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.673275 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-kube-api-access-d45c8" (OuterVolumeSpecName: "kube-api-access-d45c8") pod "b78c9d8b-0793-4e57-8a3d-ba7303f12d37" (UID: "b78c9d8b-0793-4e57-8a3d-ba7303f12d37"). InnerVolumeSpecName "kube-api-access-d45c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.727975 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6jdgj" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.759927 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29024188-b374-45b7-ad85-b2d4ca88b485-operator-scripts\") pod \"29024188-b374-45b7-ad85-b2d4ca88b485\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.760036 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrhrq\" (UniqueName: \"kubernetes.io/projected/29024188-b374-45b7-ad85-b2d4ca88b485-kube-api-access-jrhrq\") pod \"29024188-b374-45b7-ad85-b2d4ca88b485\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.760312 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.760326 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d45c8\" (UniqueName: \"kubernetes.io/projected/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-kube-api-access-d45c8\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.760579 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29024188-b374-45b7-ad85-b2d4ca88b485-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29024188-b374-45b7-ad85-b2d4ca88b485" (UID: "29024188-b374-45b7-ad85-b2d4ca88b485"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.768417 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29024188-b374-45b7-ad85-b2d4ca88b485-kube-api-access-jrhrq" (OuterVolumeSpecName: "kube-api-access-jrhrq") pod "29024188-b374-45b7-ad85-b2d4ca88b485" (UID: "29024188-b374-45b7-ad85-b2d4ca88b485"). InnerVolumeSpecName "kube-api-access-jrhrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.861681 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29024188-b374-45b7-ad85-b2d4ca88b485-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.861717 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrhrq\" (UniqueName: \"kubernetes.io/projected/29024188-b374-45b7-ad85-b2d4ca88b485-kube-api-access-jrhrq\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.903032 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.963285 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-operator-scripts\") pod \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.963414 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqd6d\" (UniqueName: \"kubernetes.io/projected/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-kube-api-access-mqd6d\") pod \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.963913 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" (UID: "7f8a5cce-1311-4cb0-9a7b-d636e27d6e69"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.969872 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-kube-api-access-mqd6d" (OuterVolumeSpecName: "kube-api-access-mqd6d") pod "7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" (UID: "7f8a5cce-1311-4cb0-9a7b-d636e27d6e69"). InnerVolumeSpecName "kube-api-access-mqd6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.065546 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqd6d\" (UniqueName: \"kubernetes.io/projected/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-kube-api-access-mqd6d\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.065585 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.330192 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7dbcc" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.340367 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.350988 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0e36-account-create-update-f5556" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.361788 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8017-account-create-update-t6d9m" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376294 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run-ovn\") pod \"e9cc96ce-182b-4231-a5e9-10197e083077\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376354 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8w9w\" (UniqueName: \"kubernetes.io/projected/821635c8-3cf1-408b-8949-81dbc48b07b6-kube-api-access-f8w9w\") pod \"821635c8-3cf1-408b-8949-81dbc48b07b6\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376425 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e9cc96ce-182b-4231-a5e9-10197e083077" (UID: "e9cc96ce-182b-4231-a5e9-10197e083077"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376430 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-log-ovn\") pod \"e9cc96ce-182b-4231-a5e9-10197e083077\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376459 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e9cc96ce-182b-4231-a5e9-10197e083077" (UID: "e9cc96ce-182b-4231-a5e9-10197e083077"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376503 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/821635c8-3cf1-408b-8949-81dbc48b07b6-operator-scripts\") pod \"821635c8-3cf1-408b-8949-81dbc48b07b6\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376540 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ddb047-8931-415b-8d0f-d0f73b72c8b3-operator-scripts\") pod \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376697 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxbg5\" (UniqueName: \"kubernetes.io/projected/e9cc96ce-182b-4231-a5e9-10197e083077-kube-api-access-nxbg5\") pod \"e9cc96ce-182b-4231-a5e9-10197e083077\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376758 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c77267-9133-440d-9f4e-536b2a021fdc-operator-scripts\") pod \"c3c77267-9133-440d-9f4e-536b2a021fdc\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376820 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfpw2\" (UniqueName: \"kubernetes.io/projected/c3c77267-9133-440d-9f4e-536b2a021fdc-kube-api-access-gfpw2\") pod \"c3c77267-9133-440d-9f4e-536b2a021fdc\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376903 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtmll\" (UniqueName: \"kubernetes.io/projected/53ddb047-8931-415b-8d0f-d0f73b72c8b3-kube-api-access-rtmll\") pod \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376934 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run\") pod \"e9cc96ce-182b-4231-a5e9-10197e083077\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376965 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-additional-scripts\") pod \"e9cc96ce-182b-4231-a5e9-10197e083077\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376991 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-scripts\") pod \"e9cc96ce-182b-4231-a5e9-10197e083077\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.377032 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ddb047-8931-415b-8d0f-d0f73b72c8b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"53ddb047-8931-415b-8d0f-d0f73b72c8b3" (UID: "53ddb047-8931-415b-8d0f-d0f73b72c8b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.377050 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/821635c8-3cf1-408b-8949-81dbc48b07b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "821635c8-3cf1-408b-8949-81dbc48b07b6" (UID: "821635c8-3cf1-408b-8949-81dbc48b07b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.377740 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3c77267-9133-440d-9f4e-536b2a021fdc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3c77267-9133-440d-9f4e-536b2a021fdc" (UID: "c3c77267-9133-440d-9f4e-536b2a021fdc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.377945 4782 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.378192 4782 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.378211 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/821635c8-3cf1-408b-8949-81dbc48b07b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.378253 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ddb047-8931-415b-8d0f-d0f73b72c8b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.378264 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c77267-9133-440d-9f4e-536b2a021fdc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.378392 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e9cc96ce-182b-4231-a5e9-10197e083077" (UID: "e9cc96ce-182b-4231-a5e9-10197e083077"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.378449 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run" (OuterVolumeSpecName: "var-run") pod "e9cc96ce-182b-4231-a5e9-10197e083077" (UID: "e9cc96ce-182b-4231-a5e9-10197e083077"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.380674 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3c77267-9133-440d-9f4e-536b2a021fdc-kube-api-access-gfpw2" (OuterVolumeSpecName: "kube-api-access-gfpw2") pod "c3c77267-9133-440d-9f4e-536b2a021fdc" (UID: "c3c77267-9133-440d-9f4e-536b2a021fdc"). InnerVolumeSpecName "kube-api-access-gfpw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.396234 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-scripts" (OuterVolumeSpecName: "scripts") pod "e9cc96ce-182b-4231-a5e9-10197e083077" (UID: "e9cc96ce-182b-4231-a5e9-10197e083077"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.398878 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9cc96ce-182b-4231-a5e9-10197e083077-kube-api-access-nxbg5" (OuterVolumeSpecName: "kube-api-access-nxbg5") pod "e9cc96ce-182b-4231-a5e9-10197e083077" (UID: "e9cc96ce-182b-4231-a5e9-10197e083077"). InnerVolumeSpecName "kube-api-access-nxbg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.416300 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821635c8-3cf1-408b-8949-81dbc48b07b6-kube-api-access-f8w9w" (OuterVolumeSpecName: "kube-api-access-f8w9w") pod "821635c8-3cf1-408b-8949-81dbc48b07b6" (UID: "821635c8-3cf1-408b-8949-81dbc48b07b6"). InnerVolumeSpecName "kube-api-access-f8w9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.416524 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ddb047-8931-415b-8d0f-d0f73b72c8b3-kube-api-access-rtmll" (OuterVolumeSpecName: "kube-api-access-rtmll") pod "53ddb047-8931-415b-8d0f-d0f73b72c8b3" (UID: "53ddb047-8931-415b-8d0f-d0f73b72c8b3"). InnerVolumeSpecName "kube-api-access-rtmll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480679 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtmll\" (UniqueName: \"kubernetes.io/projected/53ddb047-8931-415b-8d0f-d0f73b72c8b3-kube-api-access-rtmll\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480716 4782 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480729 4782 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480740 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480750 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8w9w\" (UniqueName: \"kubernetes.io/projected/821635c8-3cf1-408b-8949-81dbc48b07b6-kube-api-access-f8w9w\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480760 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxbg5\" (UniqueName: \"kubernetes.io/projected/e9cc96ce-182b-4231-a5e9-10197e083077-kube-api-access-nxbg5\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480770 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfpw2\" (UniqueName: \"kubernetes.io/projected/c3c77267-9133-440d-9f4e-536b2a021fdc-kube-api-access-gfpw2\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.489987 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6jdgj" event={"ID":"29024188-b374-45b7-ad85-b2d4ca88b485","Type":"ContainerDied","Data":"59931c74c238f44b10e52c4d20d13f519cae8b3cf2b301df562ef56ffaee122d"} Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.490046 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59931c74c238f44b10e52c4d20d13f519cae8b3cf2b301df562ef56ffaee122d" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.490127 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6jdgj" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.508020 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-h25fz" event={"ID":"e9cc96ce-182b-4231-a5e9-10197e083077","Type":"ContainerDied","Data":"9391e44047f757697bd5504265794e7bacaa93018b4f7c703f10a49c0c3e6622"} Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.508087 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9391e44047f757697bd5504265794e7bacaa93018b4f7c703f10a49c0c3e6622" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.508194 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.517291 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8017-account-create-update-t6d9m" event={"ID":"53ddb047-8931-415b-8d0f-d0f73b72c8b3","Type":"ContainerDied","Data":"cf9abdd02e5074feef030e5dd66f4f015eb1117f9a0e9921a5c0f13149d8cfb5"} Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.517318 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8017-account-create-update-t6d9m" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.517334 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf9abdd02e5074feef030e5dd66f4f015eb1117f9a0e9921a5c0f13149d8cfb5" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.519356 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7dbcc" event={"ID":"821635c8-3cf1-408b-8949-81dbc48b07b6","Type":"ContainerDied","Data":"70b6d641abeabe7c2253d823ad71d899e1791cab27661eb48d3cc66649421cca"} Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.519400 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70b6d641abeabe7c2253d823ad71d899e1791cab27661eb48d3cc66649421cca" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.519474 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7dbcc" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.531083 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0e36-account-create-update-f5556" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.531081 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0e36-account-create-update-f5556" event={"ID":"c3c77267-9133-440d-9f4e-536b2a021fdc","Type":"ContainerDied","Data":"03814b857f1ea9d519135baa096d1211b12fbe8c1aa5ce0a643e569e44214fa5"} Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.531195 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03814b857f1ea9d519135baa096d1211b12fbe8c1aa5ce0a643e569e44214fa5" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.539154 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xzm82" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.539782 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.543800 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-63a1-account-create-update-4kn5m" event={"ID":"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69","Type":"ContainerDied","Data":"4922ee8804a4e7e6bcbb57aadb57f32946333f0f7b3f9634c0cd775bf2cae365"} Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.543838 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4922ee8804a4e7e6bcbb57aadb57f32946333f0f7b3f9634c0cd775bf2cae365" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.592880 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sv8l5-config-h25fz"] Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.603773 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sv8l5-config-h25fz"] Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.715883 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sv8l5-config-qwn7m"] Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.716807 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3c77267-9133-440d-9f4e-536b2a021fdc" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.716889 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3c77267-9133-440d-9f4e-536b2a021fdc" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.716966 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29024188-b374-45b7-ad85-b2d4ca88b485" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717020 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="29024188-b374-45b7-ad85-b2d4ca88b485" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.717079 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e5ac2b-72a8-46be-839a-fe639916a32e" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717164 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e5ac2b-72a8-46be-839a-fe639916a32e" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.717221 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b78c9d8b-0793-4e57-8a3d-ba7303f12d37" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717270 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78c9d8b-0793-4e57-8a3d-ba7303f12d37" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.717331 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ddb047-8931-415b-8d0f-d0f73b72c8b3" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717392 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ddb047-8931-415b-8d0f-d0f73b72c8b3" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.717457 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9cc96ce-182b-4231-a5e9-10197e083077" containerName="ovn-config" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717533 4782 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e9cc96ce-182b-4231-a5e9-10197e083077" containerName="ovn-config" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.717619 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717708 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.717790 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821635c8-3cf1-408b-8949-81dbc48b07b6" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717876 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="821635c8-3cf1-408b-8949-81dbc48b07b6" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718123 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9cc96ce-182b-4231-a5e9-10197e083077" containerName="ovn-config" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718209 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ddb047-8931-415b-8d0f-d0f73b72c8b3" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718294 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e5ac2b-72a8-46be-839a-fe639916a32e" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718631 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b78c9d8b-0793-4e57-8a3d-ba7303f12d37" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718727 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718804 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3c77267-9133-440d-9f4e-536b2a021fdc" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718891 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="821635c8-3cf1-408b-8949-81dbc48b07b6" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718973 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="29024188-b374-45b7-ad85-b2d4ca88b485" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.719737 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.723241 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.734835 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sv8l5-config-qwn7m"] Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.800219 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzdkk\" (UniqueName: \"kubernetes.io/projected/099ffa73-778b-4dd4-acae-5efb663dfe17-kube-api-access-pzdkk\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.800264 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.800297 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-scripts\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.800401 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-log-ovn\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.800608 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-additional-scripts\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.800753 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run-ovn\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.833836 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9cc96ce-182b-4231-a5e9-10197e083077" path="/var/lib/kubelet/pods/e9cc96ce-182b-4231-a5e9-10197e083077/volumes" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.901830 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run-ovn\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" 
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.901917 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzdkk\" (UniqueName: \"kubernetes.io/projected/099ffa73-778b-4dd4-acae-5efb663dfe17-kube-api-access-pzdkk\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.901939 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.901972 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-scripts\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.901989 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-log-ovn\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.902037 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-additional-scripts\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.902880 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-additional-scripts\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.903142 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run-ovn\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.903164 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.903211 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-log-ovn\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc 
kubenswrapper[4782]: I0202 10:57:08.905545 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-scripts\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.924408 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzdkk\" (UniqueName: \"kubernetes.io/projected/099ffa73-778b-4dd4-acae-5efb663dfe17-kube-api-access-pzdkk\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:09 crc kubenswrapper[4782]: I0202 10:57:09.040247 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:11 crc kubenswrapper[4782]: I0202 10:57:11.971346 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sv8l5-config-qwn7m"] Feb 02 10:57:12 crc kubenswrapper[4782]: I0202 10:57:12.588799 4782 generic.go:334] "Generic (PLEG): container finished" podID="099ffa73-778b-4dd4-acae-5efb663dfe17" containerID="602ab9da4d7f46c94dc61771e2c8b8b42a379bdff5c5bea8faa66082cc751118" exitCode=0 Feb 02 10:57:12 crc kubenswrapper[4782]: I0202 10:57:12.588897 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-qwn7m" event={"ID":"099ffa73-778b-4dd4-acae-5efb663dfe17","Type":"ContainerDied","Data":"602ab9da4d7f46c94dc61771e2c8b8b42a379bdff5c5bea8faa66082cc751118"} Feb 02 10:57:12 crc kubenswrapper[4782]: I0202 10:57:12.588931 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-qwn7m" event={"ID":"099ffa73-778b-4dd4-acae-5efb663dfe17","Type":"ContainerStarted","Data":"3bcb421a0ed0bb31e295fe54870b127532fe03089b480a8ab1e0677c96fadf81"} Feb 02 10:57:12 crc kubenswrapper[4782]: I0202 10:57:12.590404 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v4g2v" event={"ID":"843d8da2-ab8c-4938-be4b-aa67af531e1e","Type":"ContainerStarted","Data":"a2c467e584e5732352c3aaba01db962a0b1958e32d2c79a6365d1b8fe2d96e2c"} Feb 02 10:57:12 crc kubenswrapper[4782]: I0202 10:57:12.660563 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-v4g2v" podStartSLOduration=2.747393455 podStartE2EDuration="10.660538394s" podCreationTimestamp="2026-02-02 10:57:02 +0000 UTC" firstStartedPulling="2026-02-02 10:57:03.790406442 +0000 UTC m=+1103.674599158" lastFinishedPulling="2026-02-02 10:57:11.703551381 +0000 UTC m=+1111.587744097" observedRunningTime="2026-02-02 10:57:12.653625666 +0000 UTC m=+1112.537818392" watchObservedRunningTime="2026-02-02 10:57:12.660538394 +0000 UTC m=+1112.544731110" Feb 02 10:57:13 crc kubenswrapper[4782]: I0202 10:57:13.920344 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087473 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run-ovn\") pod \"099ffa73-778b-4dd4-acae-5efb663dfe17\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087576 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "099ffa73-778b-4dd4-acae-5efb663dfe17" (UID: "099ffa73-778b-4dd4-acae-5efb663dfe17"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087588 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run\") pod \"099ffa73-778b-4dd4-acae-5efb663dfe17\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087633 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run" (OuterVolumeSpecName: "var-run") pod "099ffa73-778b-4dd4-acae-5efb663dfe17" (UID: "099ffa73-778b-4dd4-acae-5efb663dfe17"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087698 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-additional-scripts\") pod \"099ffa73-778b-4dd4-acae-5efb663dfe17\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087790 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-scripts\") pod \"099ffa73-778b-4dd4-acae-5efb663dfe17\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087831 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-log-ovn\") pod \"099ffa73-778b-4dd4-acae-5efb663dfe17\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087866 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzdkk\" (UniqueName: \"kubernetes.io/projected/099ffa73-778b-4dd4-acae-5efb663dfe17-kube-api-access-pzdkk\") pod \"099ffa73-778b-4dd4-acae-5efb663dfe17\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.088378 4782 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.088396 4782 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 
02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.089503 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "099ffa73-778b-4dd4-acae-5efb663dfe17" (UID: "099ffa73-778b-4dd4-acae-5efb663dfe17"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.089954 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-scripts" (OuterVolumeSpecName: "scripts") pod "099ffa73-778b-4dd4-acae-5efb663dfe17" (UID: "099ffa73-778b-4dd4-acae-5efb663dfe17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.090269 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "099ffa73-778b-4dd4-acae-5efb663dfe17" (UID: "099ffa73-778b-4dd4-acae-5efb663dfe17"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.103352 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099ffa73-778b-4dd4-acae-5efb663dfe17-kube-api-access-pzdkk" (OuterVolumeSpecName: "kube-api-access-pzdkk") pod "099ffa73-778b-4dd4-acae-5efb663dfe17" (UID: "099ffa73-778b-4dd4-acae-5efb663dfe17"). InnerVolumeSpecName "kube-api-access-pzdkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.190454 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.190498 4782 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.190511 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzdkk\" (UniqueName: \"kubernetes.io/projected/099ffa73-778b-4dd4-acae-5efb663dfe17-kube-api-access-pzdkk\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.190526 4782 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.611962 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-qwn7m" event={"ID":"099ffa73-778b-4dd4-acae-5efb663dfe17","Type":"ContainerDied","Data":"3bcb421a0ed0bb31e295fe54870b127532fe03089b480a8ab1e0677c96fadf81"} Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.612028 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bcb421a0ed0bb31e295fe54870b127532fe03089b480a8ab1e0677c96fadf81" Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.612129 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-qwn7m" Feb 02 10:57:15 crc kubenswrapper[4782]: I0202 10:57:15.000952 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sv8l5-config-qwn7m"] Feb 02 10:57:15 crc kubenswrapper[4782]: I0202 10:57:15.009838 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sv8l5-config-qwn7m"] Feb 02 10:57:15 crc kubenswrapper[4782]: I0202 10:57:15.623365 4782 generic.go:334] "Generic (PLEG): container finished" podID="843d8da2-ab8c-4938-be4b-aa67af531e1e" containerID="a2c467e584e5732352c3aaba01db962a0b1958e32d2c79a6365d1b8fe2d96e2c" exitCode=0 Feb 02 10:57:15 crc kubenswrapper[4782]: I0202 10:57:15.623418 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v4g2v" event={"ID":"843d8da2-ab8c-4938-be4b-aa67af531e1e","Type":"ContainerDied","Data":"a2c467e584e5732352c3aaba01db962a0b1958e32d2c79a6365d1b8fe2d96e2c"} Feb 02 10:57:16 crc kubenswrapper[4782]: I0202 10:57:16.837710 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099ffa73-778b-4dd4-acae-5efb663dfe17" path="/var/lib/kubelet/pods/099ffa73-778b-4dd4-acae-5efb663dfe17/volumes" Feb 02 10:57:16 crc kubenswrapper[4782]: I0202 10:57:16.891760 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.037961 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvls2\" (UniqueName: \"kubernetes.io/projected/843d8da2-ab8c-4938-be4b-aa67af531e1e-kube-api-access-pvls2\") pod \"843d8da2-ab8c-4938-be4b-aa67af531e1e\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.038563 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle\") pod \"843d8da2-ab8c-4938-be4b-aa67af531e1e\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.038861 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-config-data\") pod \"843d8da2-ab8c-4938-be4b-aa67af531e1e\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.044297 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/843d8da2-ab8c-4938-be4b-aa67af531e1e-kube-api-access-pvls2" (OuterVolumeSpecName: "kube-api-access-pvls2") pod "843d8da2-ab8c-4938-be4b-aa67af531e1e" (UID: "843d8da2-ab8c-4938-be4b-aa67af531e1e"). InnerVolumeSpecName "kube-api-access-pvls2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.061844 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "843d8da2-ab8c-4938-be4b-aa67af531e1e" (UID: "843d8da2-ab8c-4938-be4b-aa67af531e1e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.087631 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-config-data" (OuterVolumeSpecName: "config-data") pod "843d8da2-ab8c-4938-be4b-aa67af531e1e" (UID: "843d8da2-ab8c-4938-be4b-aa67af531e1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.142203 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.142248 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvls2\" (UniqueName: \"kubernetes.io/projected/843d8da2-ab8c-4938-be4b-aa67af531e1e-kube-api-access-pvls2\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.142261 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.646142 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v4g2v" event={"ID":"843d8da2-ab8c-4938-be4b-aa67af531e1e","Type":"ContainerDied","Data":"e20283ae35f20eb57ed6ab460abe821456fab6128c714eba5f3a94fb7961e8ce"} Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.646190 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e20283ae35f20eb57ed6ab460abe821456fab6128c714eba5f3a94fb7961e8ce" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.646237 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v4g2v" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.937181 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-g7q5g"] Feb 02 10:57:17 crc kubenswrapper[4782]: E0202 10:57:17.937900 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843d8da2-ab8c-4938-be4b-aa67af531e1e" containerName="keystone-db-sync" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.937918 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="843d8da2-ab8c-4938-be4b-aa67af531e1e" containerName="keystone-db-sync" Feb 02 10:57:17 crc kubenswrapper[4782]: E0202 10:57:17.937952 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099ffa73-778b-4dd4-acae-5efb663dfe17" containerName="ovn-config" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.937969 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="099ffa73-778b-4dd4-acae-5efb663dfe17" containerName="ovn-config" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.938335 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="099ffa73-778b-4dd4-acae-5efb663dfe17" containerName="ovn-config" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.938352 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="843d8da2-ab8c-4938-be4b-aa67af531e1e" containerName="keystone-db-sync" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.939546 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.950274 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wv5hq"] Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.951706 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.958842 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.959367 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9pmlq" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.959496 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.959585 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.959376 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-g7q5g"] Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.959730 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.997092 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wv5hq"] Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059353 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-fernet-keys\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059400 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-credential-keys\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059426 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059457 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-scripts\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059485 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-combined-ca-bundle\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc 
kubenswrapper[4782]: I0202 10:57:18.059510 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-config\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059535 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-config-data\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059564 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29bhd\" (UniqueName: \"kubernetes.io/projected/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-kube-api-access-29bhd\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059584 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059616 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9tgl\" (UniqueName: \"kubernetes.io/projected/08c688f3-2e65-48cc-8394-c3b87053d840-kube-api-access-v9tgl\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059632 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161121 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9tgl\" (UniqueName: \"kubernetes.io/projected/08c688f3-2e65-48cc-8394-c3b87053d840-kube-api-access-v9tgl\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161162 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161185 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-fernet-keys\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 
02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161205 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-credential-keys\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161227 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161253 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-scripts\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161278 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-combined-ca-bundle\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161302 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-config\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161328 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-config-data\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161357 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29bhd\" (UniqueName: \"kubernetes.io/projected/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-kube-api-access-29bhd\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161378 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.162253 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.162734 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-config\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.162876 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.163717 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.174968 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-combined-ca-bundle\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.176936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-credential-keys\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.186128 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-scripts\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.187715 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-config-data\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.193869 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-fernet-keys\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.254710 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9tgl\" (UniqueName: \"kubernetes.io/projected/08c688f3-2e65-48cc-8394-c3b87053d840-kube-api-access-v9tgl\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.255920 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29bhd\" (UniqueName: \"kubernetes.io/projected/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-kube-api-access-29bhd\") pod \"keystone-bootstrap-wv5hq\" (UID: 
\"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.272019 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.275921 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.469464 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-qjtml"] Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.473466 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qjtml" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.478158 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.478876 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tpp6m" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.510834 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-g7q5g"] Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.579864 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qjtml"] Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.594252 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-db-sync-config-data\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.594376 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-combined-ca-bundle\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.594490 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbbw7\" (UniqueName: \"kubernetes.io/projected/14e3fab7-be93-409c-a88e-85c8d0ca533c-kube-api-access-jbbw7\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.599765 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-9zhdd"] Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.601050 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.612162 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.612262 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.612364 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k6962" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.622049 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9zhdd"] Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.644729 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rvrqj"] Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.645697 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.649758 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.649998 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.650164 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l47mf" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.699294 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-tg7wz"] Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700116 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-scripts\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700173 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-db-sync-config-data\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700243 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-combined-ca-bundle\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700286 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56spc\" (UniqueName: \"kubernetes.io/projected/173458b2-9a63-4456-9bc9-698d1414a679-kube-api-access-56spc\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700310 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-combined-ca-bundle\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700345 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/173458b2-9a63-4456-9bc9-698d1414a679-logs\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700363 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-config-data\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700388 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbw7\" (UniqueName: \"kubernetes.io/projected/14e3fab7-be93-409c-a88e-85c8d0ca533c-kube-api-access-jbbw7\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.701463 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.717630 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-ztmll"] Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.718522 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.726898 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ntkkh" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.727072 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.727832 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.728818 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-db-sync-config-data\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.731515 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbw7\" (UniqueName: \"kubernetes.io/projected/14e3fab7-be93-409c-a88e-85c8d0ca533c-kube-api-access-jbbw7\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.753143 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bwx58" event={"ID":"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0","Type":"ContainerStarted","Data":"266185adfe7e4eb354941537aab95c70eb532acbac93a799d1b437d19b25b6c7"} Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.776028 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.777826 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.791203 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.791385 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.804534 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rvrqj"] Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808756 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf4fe919-15fe-4478-be0f-8e3bf00147b4-etc-machine-id\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808808 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-scripts\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808852 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-db-sync-config-data\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808879 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56spc\" (UniqueName: \"kubernetes.io/projected/173458b2-9a63-4456-9bc9-698d1414a679-kube-api-access-56spc\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808895 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-combined-ca-bundle\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808919 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808937 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9t5p\" (UniqueName: \"kubernetes.io/projected/bbab971e-9d4a-4d47-b466-ec2110de7dfb-kube-api-access-t9t5p\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/173458b2-9a63-4456-9bc9-698d1414a679-logs\") pod 
\"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808977 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-config-data\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809012 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-config-data\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809067 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-config\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809088 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-scripts\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809106 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-combined-ca-bundle\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809120 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpd9r\" (UniqueName: \"kubernetes.io/projected/bf4fe919-15fe-4478-be0f-8e3bf00147b4-kube-api-access-cpd9r\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809139 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809158 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809248 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-combined-ca-bundle\") pod \"barbican-db-sync-qjtml\" (UID: 
\"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809774 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/173458b2-9a63-4456-9bc9-698d1414a679-logs\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.822772 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.852195 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qjtml" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.852583 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-combined-ca-bundle\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.856634 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-config-data\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.875460 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-scripts\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.918410 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56spc\" (UniqueName: \"kubernetes.io/projected/173458b2-9a63-4456-9bc9-698d1414a679-kube-api-access-56spc\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.953901 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-db-sync-config-data\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954265 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frrfw\" (UniqueName: \"kubernetes.io/projected/f8943d8a-337b-4852-9c11-55191a08a850-kube-api-access-frrfw\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954319 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954349 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-t9t5p\" (UniqueName: \"kubernetes.io/projected/bbab971e-9d4a-4d47-b466-ec2110de7dfb-kube-api-access-t9t5p\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954399 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954443 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-run-httpd\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954480 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tht\" (UniqueName: \"kubernetes.io/projected/8eb720ee-de8d-42e4-b189-aa3d58478ab9-kube-api-access-g5tht\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954506 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-config\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954536 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-config-data\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954631 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-log-httpd\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954660 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954748 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-config\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954774 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-scripts\") pod \"ceilometer-0\" (UID: 
\"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954807 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-config-data\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954849 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-combined-ca-bundle\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954874 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpd9r\" (UniqueName: \"kubernetes.io/projected/bf4fe919-15fe-4478-be0f-8e3bf00147b4-kube-api-access-cpd9r\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954902 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954936 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954964 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf4fe919-15fe-4478-be0f-8e3bf00147b4-etc-machine-id\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954996 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-combined-ca-bundle\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.955031 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-scripts\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.964648 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-config\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.970534 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.971549 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.972439 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf4fe919-15fe-4478-be0f-8e3bf00147b4-etc-machine-id\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.988138 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.991276 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.006579 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-scripts\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.007266 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-combined-ca-bundle\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.008607 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-db-sync-config-data\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.019686 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9t5p\" (UniqueName: \"kubernetes.io/projected/bbab971e-9d4a-4d47-b466-ec2110de7dfb-kube-api-access-t9t5p\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.022807 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpd9r\" (UniqueName: \"kubernetes.io/projected/bf4fe919-15fe-4478-be0f-8e3bf00147b4-kube-api-access-cpd9r\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.036805 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-config-data\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066453 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frrfw\" (UniqueName: \"kubernetes.io/projected/f8943d8a-337b-4852-9c11-55191a08a850-kube-api-access-frrfw\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066593 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066624 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-run-httpd\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066651 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5tht\" (UniqueName: \"kubernetes.io/projected/8eb720ee-de8d-42e4-b189-aa3d58478ab9-kube-api-access-g5tht\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066687 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-config\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066772 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-log-httpd\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066791 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066852 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-scripts\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066880 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-config-data\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 
10:57:19.066956 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-combined-ca-bundle\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.091045 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-tg7wz"] Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.091088 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ztmll"] Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.144013 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-run-httpd\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.144950 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.146406 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-combined-ca-bundle\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.153182 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-log-httpd\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.153717 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-config\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.177498 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.178043 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-scripts\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.184762 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-config-data\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.185897 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frrfw\" (UniqueName: \"kubernetes.io/projected/f8943d8a-337b-4852-9c11-55191a08a850-kube-api-access-frrfw\") pod \"neutron-db-sync-ztmll\" 
(UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.197401 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.198057 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5tht\" (UniqueName: \"kubernetes.io/projected/8eb720ee-de8d-42e4-b189-aa3d58478ab9-kube-api-access-g5tht\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.240910 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bwx58" podStartSLOduration=3.710937413 podStartE2EDuration="35.240888636s" podCreationTimestamp="2026-02-02 10:56:44 +0000 UTC" firstStartedPulling="2026-02-02 10:56:45.769292314 +0000 UTC m=+1085.653485030" lastFinishedPulling="2026-02-02 10:57:17.299243527 +0000 UTC m=+1117.183436253" observedRunningTime="2026-02-02 10:57:18.880086923 +0000 UTC m=+1118.764279639" watchObservedRunningTime="2026-02-02 10:57:19.240888636 +0000 UTC m=+1119.125081362" Feb 02 10:57:19 crc kubenswrapper[4782]: W0202 10:57:19.241071 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08c688f3_2e65_48cc_8394_c3b87053d840.slice/crio-19b98df85c47864f53fe311acb515cf9579ed36a3ccfc4ca4e13276a52fab105 WatchSource:0}: Error finding container 19b98df85c47864f53fe311acb515cf9579ed36a3ccfc4ca4e13276a52fab105: Status 404 returned error can't find the container with id 19b98df85c47864f53fe311acb515cf9579ed36a3ccfc4ca4e13276a52fab105 Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.286707 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.366276 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-g7q5g"] Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.405032 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.419014 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.606056 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wv5hq"] Feb 02 10:57:19 crc kubenswrapper[4782]: W0202 10:57:19.618496 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc847df54_dabc_4a1c_a7dc_4d5c69b548fe.slice/crio-2e18fa8df8a3d8e935c715dc0db626457c15bb2e1f177d6b66686bf1cf3ae873 WatchSource:0}: Error finding container 2e18fa8df8a3d8e935c715dc0db626457c15bb2e1f177d6b66686bf1cf3ae873: Status 404 returned error can't find the container with id 2e18fa8df8a3d8e935c715dc0db626457c15bb2e1f177d6b66686bf1cf3ae873 Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.770141 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv5hq" event={"ID":"c847df54-dabc-4a1c-a7dc-4d5c69b548fe","Type":"ContainerStarted","Data":"2e18fa8df8a3d8e935c715dc0db626457c15bb2e1f177d6b66686bf1cf3ae873"} Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.780602 4782 generic.go:334] "Generic (PLEG): container finished" podID="08c688f3-2e65-48cc-8394-c3b87053d840" containerID="9fd628845002c75a7f47756de2c6e38ec28b97e5347a7244221f8b582f05c57b" exitCode=0 Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.780655 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" event={"ID":"08c688f3-2e65-48cc-8394-c3b87053d840","Type":"ContainerDied","Data":"9fd628845002c75a7f47756de2c6e38ec28b97e5347a7244221f8b582f05c57b"} Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.780818 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" event={"ID":"08c688f3-2e65-48cc-8394-c3b87053d840","Type":"ContainerStarted","Data":"19b98df85c47864f53fe311acb515cf9579ed36a3ccfc4ca4e13276a52fab105"} Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.905642 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qjtml"] Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.004155 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9zhdd"] Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.236790 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-tg7wz"] Feb 02 10:57:20 crc kubenswrapper[4782]: W0202 10:57:20.249058 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbab971e_9d4a_4d47_b466_ec2110de7dfb.slice/crio-1c11d42eec17c1c7713f79d1bb2871fdfc39558c452b6a5339d9e0c5f17ef2bf WatchSource:0}: Error finding container 1c11d42eec17c1c7713f79d1bb2871fdfc39558c452b6a5339d9e0c5f17ef2bf: Status 404 returned error can't find the container with id 1c11d42eec17c1c7713f79d1bb2871fdfc39558c452b6a5339d9e0c5f17ef2bf Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.522196 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.622114 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-dns-svc\") pod \"08c688f3-2e65-48cc-8394-c3b87053d840\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.622240 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-config\") pod \"08c688f3-2e65-48cc-8394-c3b87053d840\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.622279 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-sb\") pod \"08c688f3-2e65-48cc-8394-c3b87053d840\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.622342 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-nb\") pod \"08c688f3-2e65-48cc-8394-c3b87053d840\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.622447 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9tgl\" (UniqueName: \"kubernetes.io/projected/08c688f3-2e65-48cc-8394-c3b87053d840-kube-api-access-v9tgl\") pod \"08c688f3-2e65-48cc-8394-c3b87053d840\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.648661 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c688f3-2e65-48cc-8394-c3b87053d840-kube-api-access-v9tgl" (OuterVolumeSpecName: "kube-api-access-v9tgl") pod "08c688f3-2e65-48cc-8394-c3b87053d840" (UID: "08c688f3-2e65-48cc-8394-c3b87053d840"). InnerVolumeSpecName "kube-api-access-v9tgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:20 crc kubenswrapper[4782]: W0202 10:57:20.649234 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8943d8a_337b_4852_9c11_55191a08a850.slice/crio-4eeaf35a3a5ede4d6f2a9d74b8e11dba599b16a4d837fbb2ca932a313cf40194 WatchSource:0}: Error finding container 4eeaf35a3a5ede4d6f2a9d74b8e11dba599b16a4d837fbb2ca932a313cf40194: Status 404 returned error can't find the container with id 4eeaf35a3a5ede4d6f2a9d74b8e11dba599b16a4d837fbb2ca932a313cf40194 Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.658807 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.675073 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ztmll"] Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.678279 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "08c688f3-2e65-48cc-8394-c3b87053d840" (UID: "08c688f3-2e65-48cc-8394-c3b87053d840"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.686151 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rvrqj"] Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.715236 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "08c688f3-2e65-48cc-8394-c3b87053d840" (UID: "08c688f3-2e65-48cc-8394-c3b87053d840"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.724532 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-config" (OuterVolumeSpecName: "config") pod "08c688f3-2e65-48cc-8394-c3b87053d840" (UID: "08c688f3-2e65-48cc-8394-c3b87053d840"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.724977 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.725014 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9tgl\" (UniqueName: \"kubernetes.io/projected/08c688f3-2e65-48cc-8394-c3b87053d840-kube-api-access-v9tgl\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.725025 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.725036 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.741467 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "08c688f3-2e65-48cc-8394-c3b87053d840" (UID: "08c688f3-2e65-48cc-8394-c3b87053d840"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.826556 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.910409 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.915883 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qjtml" event={"ID":"14e3fab7-be93-409c-a88e-85c8d0ca533c","Type":"ContainerStarted","Data":"cf01f314448485ff21bcd2728c714dedb197b922c6d0f496ca141e9405a41bab"} Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.939850 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" event={"ID":"08c688f3-2e65-48cc-8394-c3b87053d840","Type":"ContainerDied","Data":"19b98df85c47864f53fe311acb515cf9579ed36a3ccfc4ca4e13276a52fab105"} Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.939901 4782 scope.go:117] "RemoveContainer" containerID="9fd628845002c75a7f47756de2c6e38ec28b97e5347a7244221f8b582f05c57b" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.940289 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.949337 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerStarted","Data":"995e3d21dc8fc39f728f7ea640cf5b2814a34afafd0aba1f79572a1482443e61"} Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.968459 4782 generic.go:334] "Generic (PLEG): container finished" podID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerID="d612f10c6156f6cb4afac9aec45e071dc15d31ca60fe0b15bf367f1991040e4f" exitCode=0 Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.968594 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" event={"ID":"bbab971e-9d4a-4d47-b466-ec2110de7dfb","Type":"ContainerDied","Data":"d612f10c6156f6cb4afac9aec45e071dc15d31ca60fe0b15bf367f1991040e4f"} Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.968624 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" event={"ID":"bbab971e-9d4a-4d47-b466-ec2110de7dfb","Type":"ContainerStarted","Data":"1c11d42eec17c1c7713f79d1bb2871fdfc39558c452b6a5339d9e0c5f17ef2bf"} Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.993274 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvrqj" event={"ID":"bf4fe919-15fe-4478-be0f-8e3bf00147b4","Type":"ContainerStarted","Data":"2b88f70f23d6438ad4880535e90d03f7ddcd1e6596512bcb845cbde82cf71a29"} Feb 02 10:57:21 crc kubenswrapper[4782]: I0202 10:57:21.010720 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zhdd" event={"ID":"173458b2-9a63-4456-9bc9-698d1414a679","Type":"ContainerStarted","Data":"84c341193c47fc4aa8a47eed674765c2cf34eb70060671ad9bf767eb2f34ee7a"} Feb 02 10:57:21 crc kubenswrapper[4782]: I0202 10:57:21.053881 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ztmll" event={"ID":"f8943d8a-337b-4852-9c11-55191a08a850","Type":"ContainerStarted","Data":"4eeaf35a3a5ede4d6f2a9d74b8e11dba599b16a4d837fbb2ca932a313cf40194"} 
Feb 02 10:57:21 crc kubenswrapper[4782]: I0202 10:57:21.075191 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv5hq" event={"ID":"c847df54-dabc-4a1c-a7dc-4d5c69b548fe","Type":"ContainerStarted","Data":"d77ce47c81331449fe1e66732ce31fcd1c20618737ae71f8b83041e70b41f489"} Feb 02 10:57:21 crc kubenswrapper[4782]: I0202 10:57:21.249445 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-g7q5g"] Feb 02 10:57:21 crc kubenswrapper[4782]: I0202 10:57:21.252875 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-g7q5g"] Feb 02 10:57:21 crc kubenswrapper[4782]: I0202 10:57:21.274179 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wv5hq" podStartSLOduration=4.274158842 podStartE2EDuration="4.274158842s" podCreationTimestamp="2026-02-02 10:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:21.25317017 +0000 UTC m=+1121.137362896" watchObservedRunningTime="2026-02-02 10:57:21.274158842 +0000 UTC m=+1121.158351548" Feb 02 10:57:22 crc kubenswrapper[4782]: I0202 10:57:22.096975 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" event={"ID":"bbab971e-9d4a-4d47-b466-ec2110de7dfb","Type":"ContainerStarted","Data":"8739c7eaea0f7605c65d98c62cce07647aacbed0043275eb2f4dd317c1bafd75"} Feb 02 10:57:22 crc kubenswrapper[4782]: I0202 10:57:22.097961 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:22 crc kubenswrapper[4782]: I0202 10:57:22.100794 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ztmll" event={"ID":"f8943d8a-337b-4852-9c11-55191a08a850","Type":"ContainerStarted","Data":"bb8bee75583f03091be99a3eb7b070a749409afcb16ccfe4ae7f61a996ce78c5"} Feb 02 10:57:22 crc kubenswrapper[4782]: I0202 10:57:22.144591 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" podStartSLOduration=4.144575473 podStartE2EDuration="4.144575473s" podCreationTimestamp="2026-02-02 10:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:22.144184232 +0000 UTC m=+1122.028376948" watchObservedRunningTime="2026-02-02 10:57:22.144575473 +0000 UTC m=+1122.028768189" Feb 02 10:57:22 crc kubenswrapper[4782]: I0202 10:57:22.168927 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-ztmll" podStartSLOduration=4.16890975 podStartE2EDuration="4.16890975s" podCreationTimestamp="2026-02-02 10:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:22.165868353 +0000 UTC m=+1122.050061069" watchObservedRunningTime="2026-02-02 10:57:22.16890975 +0000 UTC m=+1122.053102456" Feb 02 10:57:22 crc kubenswrapper[4782]: I0202 10:57:22.833957 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c688f3-2e65-48cc-8394-c3b87053d840" path="/var/lib/kubelet/pods/08c688f3-2e65-48cc-8394-c3b87053d840/volumes" Feb 02 10:57:27 crc kubenswrapper[4782]: I0202 10:57:27.186441 4782 generic.go:334] "Generic (PLEG): container finished" podID="c847df54-dabc-4a1c-a7dc-4d5c69b548fe" 
containerID="d77ce47c81331449fe1e66732ce31fcd1c20618737ae71f8b83041e70b41f489" exitCode=0 Feb 02 10:57:27 crc kubenswrapper[4782]: I0202 10:57:27.186592 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv5hq" event={"ID":"c847df54-dabc-4a1c-a7dc-4d5c69b548fe","Type":"ContainerDied","Data":"d77ce47c81331449fe1e66732ce31fcd1c20618737ae71f8b83041e70b41f489"} Feb 02 10:57:29 crc kubenswrapper[4782]: I0202 10:57:29.147933 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:29 crc kubenswrapper[4782]: I0202 10:57:29.217981 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tfpvt"] Feb 02 10:57:29 crc kubenswrapper[4782]: I0202 10:57:29.218223 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="dnsmasq-dns" containerID="cri-o://79490beed063daacfc93ca748659e8cb59165e9834ed5634b2a43d1cdfb9b23a" gracePeriod=10 Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.251000 4782 generic.go:334] "Generic (PLEG): container finished" podID="ede109fe-b194-4a02-992d-f1132849fc0d" containerID="79490beed063daacfc93ca748659e8cb59165e9834ed5634b2a43d1cdfb9b23a" exitCode=0 Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.251079 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" event={"ID":"ede109fe-b194-4a02-992d-f1132849fc0d","Type":"ContainerDied","Data":"79490beed063daacfc93ca748659e8cb59165e9834ed5634b2a43d1cdfb9b23a"} Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.804164 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.944387 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-fernet-keys\") pod \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.944506 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29bhd\" (UniqueName: \"kubernetes.io/projected/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-kube-api-access-29bhd\") pod \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.944589 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-config-data\") pod \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.944985 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-scripts\") pod \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.945086 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-credential-keys\") pod \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.945171 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-combined-ca-bundle\") pod \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.950392 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-scripts" (OuterVolumeSpecName: "scripts") pod "c847df54-dabc-4a1c-a7dc-4d5c69b548fe" (UID: "c847df54-dabc-4a1c-a7dc-4d5c69b548fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.954618 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c847df54-dabc-4a1c-a7dc-4d5c69b548fe" (UID: "c847df54-dabc-4a1c-a7dc-4d5c69b548fe"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.956912 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-kube-api-access-29bhd" (OuterVolumeSpecName: "kube-api-access-29bhd") pod "c847df54-dabc-4a1c-a7dc-4d5c69b548fe" (UID: "c847df54-dabc-4a1c-a7dc-4d5c69b548fe"). InnerVolumeSpecName "kube-api-access-29bhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.972437 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c847df54-dabc-4a1c-a7dc-4d5c69b548fe" (UID: "c847df54-dabc-4a1c-a7dc-4d5c69b548fe"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.974879 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-config-data" (OuterVolumeSpecName: "config-data") pod "c847df54-dabc-4a1c-a7dc-4d5c69b548fe" (UID: "c847df54-dabc-4a1c-a7dc-4d5c69b548fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.977206 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c847df54-dabc-4a1c-a7dc-4d5c69b548fe" (UID: "c847df54-dabc-4a1c-a7dc-4d5c69b548fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.047750 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.048074 4782 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.048085 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.048096 4782 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.048105 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29bhd\" (UniqueName: \"kubernetes.io/projected/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-kube-api-access-29bhd\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.048114 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.262691 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv5hq" event={"ID":"c847df54-dabc-4a1c-a7dc-4d5c69b548fe","Type":"ContainerDied","Data":"2e18fa8df8a3d8e935c715dc0db626457c15bb2e1f177d6b66686bf1cf3ae873"} Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.262736 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e18fa8df8a3d8e935c715dc0db626457c15bb2e1f177d6b66686bf1cf3ae873" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.262790 4782 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.886841 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wv5hq"] Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.892590 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wv5hq"] Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.987592 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-t58qc"] Feb 02 10:57:31 crc kubenswrapper[4782]: E0202 10:57:31.988090 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c847df54-dabc-4a1c-a7dc-4d5c69b548fe" containerName="keystone-bootstrap" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.988114 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c847df54-dabc-4a1c-a7dc-4d5c69b548fe" containerName="keystone-bootstrap" Feb 02 10:57:31 crc kubenswrapper[4782]: E0202 10:57:31.988126 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c688f3-2e65-48cc-8394-c3b87053d840" containerName="init" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.988133 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c688f3-2e65-48cc-8394-c3b87053d840" containerName="init" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.988318 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c688f3-2e65-48cc-8394-c3b87053d840" containerName="init" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.988347 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c847df54-dabc-4a1c-a7dc-4d5c69b548fe" containerName="keystone-bootstrap" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.989138 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.991447 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.991668 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.991870 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.993926 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9pmlq" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.994091 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.002474 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t58qc"] Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.163739 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-config-data\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.163835 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-combined-ca-bundle\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.163862 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-fernet-keys\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.164404 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-scripts\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.164480 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-464nh\" (UniqueName: \"kubernetes.io/projected/f45d6513-2de0-4ece-bbbc-26c6780cd145-kube-api-access-464nh\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.164531 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-credential-keys\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.267251 4782 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-scripts\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.267358 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-464nh\" (UniqueName: \"kubernetes.io/projected/f45d6513-2de0-4ece-bbbc-26c6780cd145-kube-api-access-464nh\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.267422 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-credential-keys\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.267736 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-config-data\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.267875 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-fernet-keys\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.267891 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-combined-ca-bundle\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.275227 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-scripts\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.275601 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-combined-ca-bundle\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.283414 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-fernet-keys\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.283789 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-credential-keys\") pod \"keystone-bootstrap-t58qc\" (UID: 
\"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.288435 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-464nh\" (UniqueName: \"kubernetes.io/projected/f45d6513-2de0-4ece-bbbc-26c6780cd145-kube-api-access-464nh\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.296374 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-config-data\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.315510 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.833771 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c847df54-dabc-4a1c-a7dc-4d5c69b548fe" path="/var/lib/kubelet/pods/c847df54-dabc-4a1c-a7dc-4d5c69b548fe/volumes" Feb 02 10:57:33 crc kubenswrapper[4782]: I0202 10:57:33.469946 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused" Feb 02 10:57:37 crc kubenswrapper[4782]: I0202 10:57:37.332097 4782 generic.go:334] "Generic (PLEG): container finished" podID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" containerID="266185adfe7e4eb354941537aab95c70eb532acbac93a799d1b437d19b25b6c7" exitCode=0 Feb 02 10:57:37 crc kubenswrapper[4782]: I0202 10:57:37.332168 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bwx58" event={"ID":"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0","Type":"ContainerDied","Data":"266185adfe7e4eb354941537aab95c70eb532acbac93a799d1b437d19b25b6c7"} Feb 02 10:57:42 crc kubenswrapper[4782]: E0202 10:57:42.621674 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 02 10:57:42 crc kubenswrapper[4782]: E0202 10:57:42.622249 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbbw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-qjtml_openstack(14e3fab7-be93-409c-a88e-85c8d0ca533c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:57:42 crc kubenswrapper[4782]: E0202 10:57:42.623403 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-qjtml" podUID="14e3fab7-be93-409c-a88e-85c8d0ca533c" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.723617 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.731293 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bwx58" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.887546 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-config-data\") pod \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.887740 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-db-sync-config-data\") pod \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.887850 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-combined-ca-bundle\") pod \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.888563 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xmdb\" (UniqueName: \"kubernetes.io/projected/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-kube-api-access-7xmdb\") pod \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.888729 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-nb\") pod \"ede109fe-b194-4a02-992d-f1132849fc0d\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.888822 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-dns-svc\") pod \"ede109fe-b194-4a02-992d-f1132849fc0d\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.888901 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkjmn\" (UniqueName: \"kubernetes.io/projected/ede109fe-b194-4a02-992d-f1132849fc0d-kube-api-access-wkjmn\") pod \"ede109fe-b194-4a02-992d-f1132849fc0d\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.888930 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-sb\") pod \"ede109fe-b194-4a02-992d-f1132849fc0d\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.889047 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-config\") pod \"ede109fe-b194-4a02-992d-f1132849fc0d\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.893908 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod 
"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" (UID: "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.894033 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-kube-api-access-7xmdb" (OuterVolumeSpecName: "kube-api-access-7xmdb") pod "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" (UID: "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0"). InnerVolumeSpecName "kube-api-access-7xmdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.895436 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede109fe-b194-4a02-992d-f1132849fc0d-kube-api-access-wkjmn" (OuterVolumeSpecName: "kube-api-access-wkjmn") pod "ede109fe-b194-4a02-992d-f1132849fc0d" (UID: "ede109fe-b194-4a02-992d-f1132849fc0d"). InnerVolumeSpecName "kube-api-access-wkjmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.914338 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" (UID: "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.943902 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ede109fe-b194-4a02-992d-f1132849fc0d" (UID: "ede109fe-b194-4a02-992d-f1132849fc0d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.946461 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-config" (OuterVolumeSpecName: "config") pod "ede109fe-b194-4a02-992d-f1132849fc0d" (UID: "ede109fe-b194-4a02-992d-f1132849fc0d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.956316 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ede109fe-b194-4a02-992d-f1132849fc0d" (UID: "ede109fe-b194-4a02-992d-f1132849fc0d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.962117 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-config-data" (OuterVolumeSpecName: "config-data") pod "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" (UID: "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.969705 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ede109fe-b194-4a02-992d-f1132849fc0d" (UID: "ede109fe-b194-4a02-992d-f1132849fc0d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.991369 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.991723 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkjmn\" (UniqueName: \"kubernetes.io/projected/ede109fe-b194-4a02-992d-f1132849fc0d-kube-api-access-wkjmn\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.991824 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.991915 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.991995 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.992070 4782 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.992153 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.992246 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xmdb\" (UniqueName: \"kubernetes.io/projected/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-kube-api-access-7xmdb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.992331 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.384288 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bwx58" event={"ID":"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0","Type":"ContainerDied","Data":"4a3dc46d0663b3ed61812db769ddcfdd9d2a4e53a6bf12f869fa34d7038f5e54"} Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.384336 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a3dc46d0663b3ed61812db769ddcfdd9d2a4e53a6bf12f869fa34d7038f5e54" Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.384478 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bwx58" Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.387866 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" event={"ID":"ede109fe-b194-4a02-992d-f1132849fc0d","Type":"ContainerDied","Data":"70e38c18e7eeadf50540fc012954a93846a0fd6be83565a64c6ee300da9f11db"} Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.387915 4782 scope.go:117] "RemoveContainer" containerID="79490beed063daacfc93ca748659e8cb59165e9834ed5634b2a43d1cdfb9b23a" Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.388020 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:57:43 crc kubenswrapper[4782]: E0202 10:57:43.389808 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-qjtml" podUID="14e3fab7-be93-409c-a88e-85c8d0ca533c" Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.438389 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tfpvt"] Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.445448 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tfpvt"] Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.470174 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: i/o timeout" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.317712 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-d8v8s"] Feb 02 10:57:44 crc kubenswrapper[4782]: E0202 10:57:44.318150 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="init" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.318165 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="init" Feb 02 10:57:44 crc kubenswrapper[4782]: E0202 10:57:44.318175 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="dnsmasq-dns" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.318183 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="dnsmasq-dns" Feb 02 10:57:44 crc kubenswrapper[4782]: E0202 10:57:44.318200 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" containerName="glance-db-sync" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.318209 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" containerName="glance-db-sync" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.318407 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="dnsmasq-dns" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.318428 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" containerName="glance-db-sync" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 
10:57:44.320031 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.372913 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-d8v8s"] Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.422237 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.422295 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.422326 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-config\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.422386 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7gz\" (UniqueName: \"kubernetes.io/projected/3a78ac20-6473-4217-aa2d-3d2b4f03023b-kube-api-access-ch7gz\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.422450 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.523968 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.524062 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.524082 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.524101 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-config\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.524146 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch7gz\" (UniqueName: \"kubernetes.io/projected/3a78ac20-6473-4217-aa2d-3d2b4f03023b-kube-api-access-ch7gz\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.524921 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.525151 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.525163 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-config\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.525589 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.548887 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch7gz\" (UniqueName: \"kubernetes.io/projected/3a78ac20-6473-4217-aa2d-3d2b4f03023b-kube-api-access-ch7gz\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.666268 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.830411 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" path="/var/lib/kubelet/pods/ede109fe-b194-4a02-992d-f1132849fc0d/volumes" Feb 02 10:57:45 crc kubenswrapper[4782]: E0202 10:57:45.125319 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 02 10:57:45 crc kubenswrapper[4782]: E0202 10:57:45.125496 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpd9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rvrqj_openstack(bf4fe919-15fe-4478-be0f-8e3bf00147b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:57:45 crc kubenswrapper[4782]: E0202 10:57:45.126774 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/cinder-db-sync-rvrqj" podUID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" Feb 02 10:57:45 crc kubenswrapper[4782]: I0202 10:57:45.151093 4782 scope.go:117] "RemoveContainer" containerID="ab258d10ba96d70b4cfb3ed122e2e498a9dd3427d5d89bbbbf656b5efa7359c1" Feb 02 10:57:45 crc kubenswrapper[4782]: I0202 10:57:45.451719 4782 generic.go:334] "Generic (PLEG): container finished" podID="f8943d8a-337b-4852-9c11-55191a08a850" containerID="bb8bee75583f03091be99a3eb7b070a749409afcb16ccfe4ae7f61a996ce78c5" exitCode=0 Feb 02 10:57:45 crc kubenswrapper[4782]: I0202 10:57:45.451777 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ztmll" event={"ID":"f8943d8a-337b-4852-9c11-55191a08a850","Type":"ContainerDied","Data":"bb8bee75583f03091be99a3eb7b070a749409afcb16ccfe4ae7f61a996ce78c5"} Feb 02 10:57:45 crc kubenswrapper[4782]: E0202 10:57:45.453790 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-rvrqj" podUID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" Feb 02 10:57:45 crc kubenswrapper[4782]: W0202 10:57:45.655298 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf45d6513_2de0_4ece_bbbc_26c6780cd145.slice/crio-f82234fec9aa85ae5a4b7eb150c4046735b933ad52170747b999d2700ebd8ccb WatchSource:0}: Error finding container f82234fec9aa85ae5a4b7eb150c4046735b933ad52170747b999d2700ebd8ccb: Status 404 returned error can't find the container with id f82234fec9aa85ae5a4b7eb150c4046735b933ad52170747b999d2700ebd8ccb Feb 02 10:57:45 crc kubenswrapper[4782]: I0202 10:57:45.661039 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 10:57:45 crc kubenswrapper[4782]: I0202 10:57:45.663448 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t58qc"] Feb 02 10:57:45 crc kubenswrapper[4782]: I0202 10:57:45.757798 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-d8v8s"] Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.465928 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerStarted","Data":"cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08"} Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.472866 4782 generic.go:334] "Generic (PLEG): container finished" podID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerID="8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27" exitCode=0 Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.473003 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" event={"ID":"3a78ac20-6473-4217-aa2d-3d2b4f03023b","Type":"ContainerDied","Data":"8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27"} Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.473075 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" event={"ID":"3a78ac20-6473-4217-aa2d-3d2b4f03023b","Type":"ContainerStarted","Data":"4d3202753bbc7ad4f1069d7c505ccba805f85fd5770fe99dd65abc48e1c13646"} Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.476290 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-t58qc" event={"ID":"f45d6513-2de0-4ece-bbbc-26c6780cd145","Type":"ContainerStarted","Data":"f96dc9d1eca03acac5731eacf624fbd7091513cfed0cc461bda4976a5d7b4254"} Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.476362 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t58qc" event={"ID":"f45d6513-2de0-4ece-bbbc-26c6780cd145","Type":"ContainerStarted","Data":"f82234fec9aa85ae5a4b7eb150c4046735b933ad52170747b999d2700ebd8ccb"} Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.479319 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zhdd" event={"ID":"173458b2-9a63-4456-9bc9-698d1414a679","Type":"ContainerStarted","Data":"882286d92ef94b177095925f1761989436448214282f382f07a04e273ec62549"} Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.537398 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-t58qc" podStartSLOduration=15.537380106 podStartE2EDuration="15.537380106s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:46.533796244 +0000 UTC m=+1146.417988980" watchObservedRunningTime="2026-02-02 10:57:46.537380106 +0000 UTC m=+1146.421572822" Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.551214 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-9zhdd" podStartSLOduration=3.426953622 podStartE2EDuration="28.551195412s" podCreationTimestamp="2026-02-02 10:57:18 +0000 UTC" firstStartedPulling="2026-02-02 10:57:20.000443449 +0000 UTC m=+1119.884636175" lastFinishedPulling="2026-02-02 10:57:45.124685249 +0000 UTC m=+1145.008877965" observedRunningTime="2026-02-02 10:57:46.547677192 +0000 UTC m=+1146.431869918" watchObservedRunningTime="2026-02-02 10:57:46.551195412 +0000 UTC m=+1146.435388128" Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.950585 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.077658 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-combined-ca-bundle\") pod \"f8943d8a-337b-4852-9c11-55191a08a850\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.077763 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frrfw\" (UniqueName: \"kubernetes.io/projected/f8943d8a-337b-4852-9c11-55191a08a850-kube-api-access-frrfw\") pod \"f8943d8a-337b-4852-9c11-55191a08a850\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.077882 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-config\") pod \"f8943d8a-337b-4852-9c11-55191a08a850\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.081563 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8943d8a-337b-4852-9c11-55191a08a850-kube-api-access-frrfw" (OuterVolumeSpecName: "kube-api-access-frrfw") pod "f8943d8a-337b-4852-9c11-55191a08a850" (UID: "f8943d8a-337b-4852-9c11-55191a08a850"). InnerVolumeSpecName "kube-api-access-frrfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.104335 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8943d8a-337b-4852-9c11-55191a08a850" (UID: "f8943d8a-337b-4852-9c11-55191a08a850"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.107109 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-config" (OuterVolumeSpecName: "config") pod "f8943d8a-337b-4852-9c11-55191a08a850" (UID: "f8943d8a-337b-4852-9c11-55191a08a850"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.180243 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.180285 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frrfw\" (UniqueName: \"kubernetes.io/projected/f8943d8a-337b-4852-9c11-55191a08a850-kube-api-access-frrfw\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.180296 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.490507 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerStarted","Data":"64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9"} Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.496792 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" event={"ID":"3a78ac20-6473-4217-aa2d-3d2b4f03023b","Type":"ContainerStarted","Data":"49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf"} Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.497383 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.499852 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ztmll" event={"ID":"f8943d8a-337b-4852-9c11-55191a08a850","Type":"ContainerDied","Data":"4eeaf35a3a5ede4d6f2a9d74b8e11dba599b16a4d837fbb2ca932a313cf40194"} Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.499908 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eeaf35a3a5ede4d6f2a9d74b8e11dba599b16a4d837fbb2ca932a313cf40194" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.500094 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.543885 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" podStartSLOduration=3.543864417 podStartE2EDuration="3.543864417s" podCreationTimestamp="2026-02-02 10:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:47.528367353 +0000 UTC m=+1147.412560089" watchObservedRunningTime="2026-02-02 10:57:47.543864417 +0000 UTC m=+1147.428057133" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.718719 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-d8v8s"] Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.775757 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-pbdmr"] Feb 02 10:57:47 crc kubenswrapper[4782]: E0202 10:57:47.776093 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8943d8a-337b-4852-9c11-55191a08a850" containerName="neutron-db-sync" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.776106 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8943d8a-337b-4852-9c11-55191a08a850" containerName="neutron-db-sync" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.776260 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8943d8a-337b-4852-9c11-55191a08a850" containerName="neutron-db-sync" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.787164 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.825656 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-pbdmr"] Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.898778 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-dns-svc\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.898870 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-config\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.898903 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmddp\" (UniqueName: \"kubernetes.io/projected/7f6a8ebb-a211-4505-b934-3048a67b2f47-kube-api-access-rmddp\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.899020 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:47 crc 
kubenswrapper[4782]: I0202 10:57:47.899046 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.925228 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c4497f454-mphzd"] Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.927565 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.936315 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ntkkh" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.936675 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.936832 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.936978 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.979011 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c4497f454-mphzd"] Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.001439 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-httpd-config\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.001497 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-dns-svc\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.002457 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-dns-svc\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.002544 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-config\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.003342 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmddp\" (UniqueName: \"kubernetes.io/projected/7f6a8ebb-a211-4505-b934-3048a67b2f47-kube-api-access-rmddp\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.003282 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-config\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.003925 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6hh\" (UniqueName: \"kubernetes.io/projected/64a58e87-7403-40ee-804f-3ddd256a166a-kube-api-access-nj6hh\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.003975 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-ovndb-tls-certs\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.004054 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.004085 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.004157 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-combined-ca-bundle\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.004219 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-config\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.005154 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.010383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.049688 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rmddp\" (UniqueName: \"kubernetes.io/projected/7f6a8ebb-a211-4505-b934-3048a67b2f47-kube-api-access-rmddp\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.105898 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-httpd-config\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.105978 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6hh\" (UniqueName: \"kubernetes.io/projected/64a58e87-7403-40ee-804f-3ddd256a166a-kube-api-access-nj6hh\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.106004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-ovndb-tls-certs\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.106041 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-combined-ca-bundle\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.106067 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-config\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.113028 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-httpd-config\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.113819 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.128299 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-combined-ca-bundle\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.128893 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-ovndb-tls-certs\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.130580 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-config\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.136877 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6hh\" (UniqueName: \"kubernetes.io/projected/64a58e87-7403-40ee-804f-3ddd256a166a-kube-api-access-nj6hh\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.267236 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.948175 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-pbdmr"] Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.276075 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c4497f454-mphzd"] Feb 02 10:57:49 crc kubenswrapper[4782]: W0202 10:57:49.282559 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64a58e87_7403_40ee_804f_3ddd256a166a.slice/crio-320ca372c7bfc8f61ec1c100d757a206e5a44d87197850166ccabc894748efbf WatchSource:0}: Error finding container 320ca372c7bfc8f61ec1c100d757a206e5a44d87197850166ccabc894748efbf: Status 404 returned error can't find the container with id 320ca372c7bfc8f61ec1c100d757a206e5a44d87197850166ccabc894748efbf Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.530119 4782 generic.go:334] "Generic (PLEG): container finished" podID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerID="307a5c49cf6e90bbab2ae7599314e3e57ac09662374b82e043747eec646d2bdd" exitCode=0 Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.530169 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" event={"ID":"7f6a8ebb-a211-4505-b934-3048a67b2f47","Type":"ContainerDied","Data":"307a5c49cf6e90bbab2ae7599314e3e57ac09662374b82e043747eec646d2bdd"} Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.530212 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" event={"ID":"7f6a8ebb-a211-4505-b934-3048a67b2f47","Type":"ContainerStarted","Data":"d2d57fff99a40c3d971a276c962ad40364a2dc18610c2d3bd9d74bd06dd02f62"} Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.544293 
4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4497f454-mphzd" event={"ID":"64a58e87-7403-40ee-804f-3ddd256a166a","Type":"ContainerStarted","Data":"320ca372c7bfc8f61ec1c100d757a206e5a44d87197850166ccabc894748efbf"} Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.560556 4782 generic.go:334] "Generic (PLEG): container finished" podID="173458b2-9a63-4456-9bc9-698d1414a679" containerID="882286d92ef94b177095925f1761989436448214282f382f07a04e273ec62549" exitCode=0 Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.560757 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerName="dnsmasq-dns" containerID="cri-o://49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf" gracePeriod=10 Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.560971 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zhdd" event={"ID":"173458b2-9a63-4456-9bc9-698d1414a679","Type":"ContainerDied","Data":"882286d92ef94b177095925f1761989436448214282f382f07a04e273ec62549"} Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.038591 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.068527 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-nb\") pod \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.068620 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-sb\") pod \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.068668 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch7gz\" (UniqueName: \"kubernetes.io/projected/3a78ac20-6473-4217-aa2d-3d2b4f03023b-kube-api-access-ch7gz\") pod \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.068710 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-dns-svc\") pod \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.068759 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-config\") pod \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.081484 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a78ac20-6473-4217-aa2d-3d2b4f03023b-kube-api-access-ch7gz" (OuterVolumeSpecName: "kube-api-access-ch7gz") pod "3a78ac20-6473-4217-aa2d-3d2b4f03023b" (UID: "3a78ac20-6473-4217-aa2d-3d2b4f03023b"). InnerVolumeSpecName "kube-api-access-ch7gz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.174527 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch7gz\" (UniqueName: \"kubernetes.io/projected/3a78ac20-6473-4217-aa2d-3d2b4f03023b-kube-api-access-ch7gz\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.189108 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a78ac20-6473-4217-aa2d-3d2b4f03023b" (UID: "3a78ac20-6473-4217-aa2d-3d2b4f03023b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.192914 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-config" (OuterVolumeSpecName: "config") pod "3a78ac20-6473-4217-aa2d-3d2b4f03023b" (UID: "3a78ac20-6473-4217-aa2d-3d2b4f03023b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.196518 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a78ac20-6473-4217-aa2d-3d2b4f03023b" (UID: "3a78ac20-6473-4217-aa2d-3d2b4f03023b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.197447 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a78ac20-6473-4217-aa2d-3d2b4f03023b" (UID: "3a78ac20-6473-4217-aa2d-3d2b4f03023b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.276996 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.277031 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.277048 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.277057 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.466793 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-644b87c8cc-7cfbr"] Feb 02 10:57:50 crc kubenswrapper[4782]: E0202 10:57:50.467160 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerName="init" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.467186 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerName="init" Feb 02 10:57:50 crc kubenswrapper[4782]: E0202 10:57:50.467211 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerName="dnsmasq-dns" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.467219 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerName="dnsmasq-dns" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.471497 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerName="dnsmasq-dns" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.472402 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.475305 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.475462 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.479962 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-httpd-config\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.480139 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d72m\" (UniqueName: \"kubernetes.io/projected/c1b76222-36df-45a6-ac9f-edb412c8a2ad-kube-api-access-7d72m\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.480236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-combined-ca-bundle\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.480352 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-ovndb-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.480427 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-internal-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.480572 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-public-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.480691 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-config\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.485425 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-644b87c8cc-7cfbr"] Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.580783 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" 
event={"ID":"7f6a8ebb-a211-4505-b934-3048a67b2f47","Type":"ContainerStarted","Data":"57ad28ba710e6e1893b164e98b4a9426687f44b060fefafca3e1d4c41edc3143"} Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.581118 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.581829 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-ovndb-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.581860 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-internal-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.581943 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-public-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.581967 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-config\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.582019 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-httpd-config\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.582050 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d72m\" (UniqueName: \"kubernetes.io/projected/c1b76222-36df-45a6-ac9f-edb412c8a2ad-kube-api-access-7d72m\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.582084 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-combined-ca-bundle\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.590037 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-public-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.590798 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4497f454-mphzd" 
event={"ID":"64a58e87-7403-40ee-804f-3ddd256a166a","Type":"ContainerStarted","Data":"0986fa20c95b296f1e3d0bb4136c8a84c1e716858b0306720e5061305da8efda"} Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.590831 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4497f454-mphzd" event={"ID":"64a58e87-7403-40ee-804f-3ddd256a166a","Type":"ContainerStarted","Data":"d5712d2140fc769d95cc498077c5b51dd674fa5d0e2d88428c1fd7cfbc99eafe"} Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.591253 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-combined-ca-bundle\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.591387 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.592057 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-httpd-config\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.598817 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-config\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.603706 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-ovndb-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.609712 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-internal-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.625195 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" podStartSLOduration=3.625177067 podStartE2EDuration="3.625177067s" podCreationTimestamp="2026-02-02 10:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:50.62388586 +0000 UTC m=+1150.508078586" watchObservedRunningTime="2026-02-02 10:57:50.625177067 +0000 UTC m=+1150.509369773" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.628981 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d72m\" (UniqueName: \"kubernetes.io/projected/c1b76222-36df-45a6-ac9f-edb412c8a2ad-kube-api-access-7d72m\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.640853 4782 generic.go:334] "Generic (PLEG): container 
finished" podID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerID="49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf" exitCode=0 Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.641017 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.641074 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" event={"ID":"3a78ac20-6473-4217-aa2d-3d2b4f03023b","Type":"ContainerDied","Data":"49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf"} Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.641106 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" event={"ID":"3a78ac20-6473-4217-aa2d-3d2b4f03023b","Type":"ContainerDied","Data":"4d3202753bbc7ad4f1069d7c505ccba805f85fd5770fe99dd65abc48e1c13646"} Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.641129 4782 scope.go:117] "RemoveContainer" containerID="49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.727732 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c4497f454-mphzd" podStartSLOduration=3.727709506 podStartE2EDuration="3.727709506s" podCreationTimestamp="2026-02-02 10:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:50.65319539 +0000 UTC m=+1150.537388106" watchObservedRunningTime="2026-02-02 10:57:50.727709506 +0000 UTC m=+1150.611902222" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.758553 4782 scope.go:117] "RemoveContainer" containerID="8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.770239 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-d8v8s"] Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.779557 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-d8v8s"] Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.793469 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.828786 4782 scope.go:117] "RemoveContainer" containerID="49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf" Feb 02 10:57:50 crc kubenswrapper[4782]: E0202 10:57:50.830267 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf\": container with ID starting with 49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf not found: ID does not exist" containerID="49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.830329 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf"} err="failed to get container status \"49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf\": rpc error: code = NotFound desc = could not find container \"49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf\": container with ID starting with 49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf not found: ID does not exist" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.830354 4782 scope.go:117] "RemoveContainer" containerID="8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27" Feb 02 10:57:50 crc kubenswrapper[4782]: E0202 10:57:50.847456 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27\": container with ID starting with 8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27 not found: ID does not exist" containerID="8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.847518 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27"} err="failed to get container status \"8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27\": rpc error: code = NotFound desc = could not find container \"8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27\": container with ID starting with 8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27 not found: ID does not exist" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.896279 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" path="/var/lib/kubelet/pods/3a78ac20-6473-4217-aa2d-3d2b4f03023b/volumes" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.145448 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.316140 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-config-data\") pod \"173458b2-9a63-4456-9bc9-698d1414a679\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.316191 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-scripts\") pod \"173458b2-9a63-4456-9bc9-698d1414a679\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.316313 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/173458b2-9a63-4456-9bc9-698d1414a679-logs\") pod \"173458b2-9a63-4456-9bc9-698d1414a679\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.316387 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-combined-ca-bundle\") pod \"173458b2-9a63-4456-9bc9-698d1414a679\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.316421 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56spc\" (UniqueName: \"kubernetes.io/projected/173458b2-9a63-4456-9bc9-698d1414a679-kube-api-access-56spc\") pod \"173458b2-9a63-4456-9bc9-698d1414a679\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.318918 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/173458b2-9a63-4456-9bc9-698d1414a679-logs" (OuterVolumeSpecName: "logs") pod "173458b2-9a63-4456-9bc9-698d1414a679" (UID: "173458b2-9a63-4456-9bc9-698d1414a679"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.328844 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/173458b2-9a63-4456-9bc9-698d1414a679-kube-api-access-56spc" (OuterVolumeSpecName: "kube-api-access-56spc") pod "173458b2-9a63-4456-9bc9-698d1414a679" (UID: "173458b2-9a63-4456-9bc9-698d1414a679"). InnerVolumeSpecName "kube-api-access-56spc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.330910 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-scripts" (OuterVolumeSpecName: "scripts") pod "173458b2-9a63-4456-9bc9-698d1414a679" (UID: "173458b2-9a63-4456-9bc9-698d1414a679"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.358158 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "173458b2-9a63-4456-9bc9-698d1414a679" (UID: "173458b2-9a63-4456-9bc9-698d1414a679"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.374929 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-config-data" (OuterVolumeSpecName: "config-data") pod "173458b2-9a63-4456-9bc9-698d1414a679" (UID: "173458b2-9a63-4456-9bc9-698d1414a679"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.419429 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.419456 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/173458b2-9a63-4456-9bc9-698d1414a679-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.419466 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.419477 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56spc\" (UniqueName: \"kubernetes.io/projected/173458b2-9a63-4456-9bc9-698d1414a679-kube-api-access-56spc\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.419486 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.662565 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zhdd" event={"ID":"173458b2-9a63-4456-9bc9-698d1414a679","Type":"ContainerDied","Data":"84c341193c47fc4aa8a47eed674765c2cf34eb70060671ad9bf767eb2f34ee7a"} Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.662889 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84c341193c47fc4aa8a47eed674765c2cf34eb70060671ad9bf767eb2f34ee7a" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.662956 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.695897 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-54577c875b-pcjgd"] Feb 02 10:57:51 crc kubenswrapper[4782]: E0202 10:57:51.696228 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173458b2-9a63-4456-9bc9-698d1414a679" containerName="placement-db-sync" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.696243 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="173458b2-9a63-4456-9bc9-698d1414a679" containerName="placement-db-sync" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.696465 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="173458b2-9a63-4456-9bc9-698d1414a679" containerName="placement-db-sync" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.697941 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.704554 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.705317 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.705491 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.705677 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.706957 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k6962" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.728008 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54577c875b-pcjgd"] Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834215 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6nnm\" (UniqueName: \"kubernetes.io/projected/060c1eb2-7773-4122-8725-bf421f0feaac-kube-api-access-p6nnm\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834270 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-combined-ca-bundle\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834321 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-public-tls-certs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834402 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-scripts\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834442 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060c1eb2-7773-4122-8725-bf421f0feaac-logs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834497 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-internal-tls-certs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834512 
4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-config-data\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936231 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-combined-ca-bundle\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936293 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-public-tls-certs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936329 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-scripts\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936370 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060c1eb2-7773-4122-8725-bf421f0feaac-logs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936409 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-internal-tls-certs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936440 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-config-data\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936487 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6nnm\" (UniqueName: \"kubernetes.io/projected/060c1eb2-7773-4122-8725-bf421f0feaac-kube-api-access-p6nnm\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.937854 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060c1eb2-7773-4122-8725-bf421f0feaac-logs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.943539 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-internal-tls-certs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.945208 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-combined-ca-bundle\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.945791 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-scripts\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.946444 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-public-tls-certs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.955341 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6nnm\" (UniqueName: \"kubernetes.io/projected/060c1eb2-7773-4122-8725-bf421f0feaac-kube-api-access-p6nnm\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.957796 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-config-data\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:52 crc kubenswrapper[4782]: I0202 10:57:52.037287 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:52 crc kubenswrapper[4782]: I0202 10:57:52.138975 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-644b87c8cc-7cfbr"] Feb 02 10:57:52 crc kubenswrapper[4782]: I0202 10:57:52.671976 4782 generic.go:334] "Generic (PLEG): container finished" podID="f45d6513-2de0-4ece-bbbc-26c6780cd145" containerID="f96dc9d1eca03acac5731eacf624fbd7091513cfed0cc461bda4976a5d7b4254" exitCode=0 Feb 02 10:57:52 crc kubenswrapper[4782]: I0202 10:57:52.672059 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t58qc" event={"ID":"f45d6513-2de0-4ece-bbbc-26c6780cd145","Type":"ContainerDied","Data":"f96dc9d1eca03acac5731eacf624fbd7091513cfed0cc461bda4976a5d7b4254"} Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.695122 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t58qc" event={"ID":"f45d6513-2de0-4ece-bbbc-26c6780cd145","Type":"ContainerDied","Data":"f82234fec9aa85ae5a4b7eb150c4046735b933ad52170747b999d2700ebd8ccb"} Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.695635 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f82234fec9aa85ae5a4b7eb150c4046735b933ad52170747b999d2700ebd8ccb" Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.723820 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644b87c8cc-7cfbr" event={"ID":"c1b76222-36df-45a6-ac9f-edb412c8a2ad","Type":"ContainerStarted","Data":"8889fd515dbda34f10b34a65e145848adfe2d17e55c2e3acb24297eefee67df3"} Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.808069 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.991210 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-config-data\") pod \"f45d6513-2de0-4ece-bbbc-26c6780cd145\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.991651 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-scripts\") pod \"f45d6513-2de0-4ece-bbbc-26c6780cd145\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.991687 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-464nh\" (UniqueName: \"kubernetes.io/projected/f45d6513-2de0-4ece-bbbc-26c6780cd145-kube-api-access-464nh\") pod \"f45d6513-2de0-4ece-bbbc-26c6780cd145\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.991740 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-combined-ca-bundle\") pod \"f45d6513-2de0-4ece-bbbc-26c6780cd145\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.991782 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-credential-keys\") pod \"f45d6513-2de0-4ece-bbbc-26c6780cd145\" (UID: 
\"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.991809 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-fernet-keys\") pod \"f45d6513-2de0-4ece-bbbc-26c6780cd145\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.995780 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f45d6513-2de0-4ece-bbbc-26c6780cd145" (UID: "f45d6513-2de0-4ece-bbbc-26c6780cd145"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.997301 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-scripts" (OuterVolumeSpecName: "scripts") pod "f45d6513-2de0-4ece-bbbc-26c6780cd145" (UID: "f45d6513-2de0-4ece-bbbc-26c6780cd145"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.006761 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f45d6513-2de0-4ece-bbbc-26c6780cd145" (UID: "f45d6513-2de0-4ece-bbbc-26c6780cd145"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.011961 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45d6513-2de0-4ece-bbbc-26c6780cd145-kube-api-access-464nh" (OuterVolumeSpecName: "kube-api-access-464nh") pod "f45d6513-2de0-4ece-bbbc-26c6780cd145" (UID: "f45d6513-2de0-4ece-bbbc-26c6780cd145"). InnerVolumeSpecName "kube-api-access-464nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.022336 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-config-data" (OuterVolumeSpecName: "config-data") pod "f45d6513-2de0-4ece-bbbc-26c6780cd145" (UID: "f45d6513-2de0-4ece-bbbc-26c6780cd145"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.028415 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f45d6513-2de0-4ece-bbbc-26c6780cd145" (UID: "f45d6513-2de0-4ece-bbbc-26c6780cd145"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.093978 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.094013 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.094023 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-464nh\" (UniqueName: \"kubernetes.io/projected/f45d6513-2de0-4ece-bbbc-26c6780cd145-kube-api-access-464nh\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.094032 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.094041 4782 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.094048 4782 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.124307 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54577c875b-pcjgd"] Feb 02 10:57:55 crc kubenswrapper[4782]: W0202 10:57:55.125608 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060c1eb2_7773_4122_8725_bf421f0feaac.slice/crio-cc82b2ae4f32dd0c9b66e6fd35aca7b63dafb5ecb405dc4f5284a0fab8cac1a7 WatchSource:0}: Error finding container cc82b2ae4f32dd0c9b66e6fd35aca7b63dafb5ecb405dc4f5284a0fab8cac1a7: Status 404 returned error can't find the container with id cc82b2ae4f32dd0c9b66e6fd35aca7b63dafb5ecb405dc4f5284a0fab8cac1a7 Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.735183 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerStarted","Data":"204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f"} Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.737830 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54577c875b-pcjgd" event={"ID":"060c1eb2-7773-4122-8725-bf421f0feaac","Type":"ContainerStarted","Data":"b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5"} Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.737868 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54577c875b-pcjgd" event={"ID":"060c1eb2-7773-4122-8725-bf421f0feaac","Type":"ContainerStarted","Data":"ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d"} Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.737883 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54577c875b-pcjgd" 
event={"ID":"060c1eb2-7773-4122-8725-bf421f0feaac","Type":"ContainerStarted","Data":"cc82b2ae4f32dd0c9b66e6fd35aca7b63dafb5ecb405dc4f5284a0fab8cac1a7"} Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.738203 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.738229 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.739965 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.744012 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644b87c8cc-7cfbr" event={"ID":"c1b76222-36df-45a6-ac9f-edb412c8a2ad","Type":"ContainerStarted","Data":"f08c19bb6d845c3aebb5e1bac413c2ca39e5ea569262e16c1b854d1f98b2646c"} Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.744049 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644b87c8cc-7cfbr" event={"ID":"c1b76222-36df-45a6-ac9f-edb412c8a2ad","Type":"ContainerStarted","Data":"786fa8a7409c75cc654771de8b93722936d3a222c6348896fdf1e92677c32d53"} Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.744179 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.763194 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-54577c875b-pcjgd" podStartSLOduration=4.763175421 podStartE2EDuration="4.763175421s" podCreationTimestamp="2026-02-02 10:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:55.763095099 +0000 UTC m=+1155.647287835" watchObservedRunningTime="2026-02-02 10:57:55.763175421 +0000 UTC m=+1155.647368137" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.802369 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-644b87c8cc-7cfbr" podStartSLOduration=5.802348224 podStartE2EDuration="5.802348224s" podCreationTimestamp="2026-02-02 10:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:55.791807422 +0000 UTC m=+1155.676000138" watchObservedRunningTime="2026-02-02 10:57:55.802348224 +0000 UTC m=+1155.686540950" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.987198 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-79d66b847-whsks"] Feb 02 10:57:55 crc kubenswrapper[4782]: E0202 10:57:55.987876 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45d6513-2de0-4ece-bbbc-26c6780cd145" containerName="keystone-bootstrap" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.987977 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45d6513-2de0-4ece-bbbc-26c6780cd145" containerName="keystone-bootstrap" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.988180 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45d6513-2de0-4ece-bbbc-26c6780cd145" containerName="keystone-bootstrap" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.988743 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.002088 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.002235 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.002333 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.002590 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.002850 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.002977 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9pmlq" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.084691 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79d66b847-whsks"] Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.119539 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-scripts\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.119873 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-credential-keys\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.119891 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-internal-tls-certs\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.119934 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-config-data\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.120009 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-fernet-keys\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.120030 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl5sf\" (UniqueName: \"kubernetes.io/projected/df4aa6a3-22bf-459c-becf-3685a170ae22-kube-api-access-wl5sf\") pod \"keystone-79d66b847-whsks\" (UID: 
\"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.120050 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-public-tls-certs\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.120091 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-combined-ca-bundle\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.236045 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-config-data\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.236225 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-fernet-keys\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.236292 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-public-tls-certs\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.236386 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl5sf\" (UniqueName: \"kubernetes.io/projected/df4aa6a3-22bf-459c-becf-3685a170ae22-kube-api-access-wl5sf\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.236681 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-combined-ca-bundle\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.236878 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-scripts\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.237004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-credential-keys\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 
10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.237051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-internal-tls-certs\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.247339 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-combined-ca-bundle\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.248292 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-config-data\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.248947 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-fernet-keys\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.250092 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-public-tls-certs\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.255890 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-scripts\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.255894 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-credential-keys\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.256164 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-internal-tls-certs\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.259088 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl5sf\" (UniqueName: \"kubernetes.io/projected/df4aa6a3-22bf-459c-becf-3685a170ae22-kube-api-access-wl5sf\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.318208 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:57 crc kubenswrapper[4782]: I0202 10:57:57.459003 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79d66b847-whsks"] Feb 02 10:57:57 crc kubenswrapper[4782]: W0202 10:57:57.463310 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf4aa6a3_22bf_459c_becf_3685a170ae22.slice/crio-e94bcc6fccde95efd30deab4e735959e10cbdfa223d7a197a11fa7bd2dc73b7b WatchSource:0}: Error finding container e94bcc6fccde95efd30deab4e735959e10cbdfa223d7a197a11fa7bd2dc73b7b: Status 404 returned error can't find the container with id e94bcc6fccde95efd30deab4e735959e10cbdfa223d7a197a11fa7bd2dc73b7b Feb 02 10:57:57 crc kubenswrapper[4782]: I0202 10:57:57.765729 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79d66b847-whsks" event={"ID":"df4aa6a3-22bf-459c-becf-3685a170ae22","Type":"ContainerStarted","Data":"56a1f84f5103d341659155875850f80e7d181fa0691ff0a747d748709cb782f0"} Feb 02 10:57:57 crc kubenswrapper[4782]: I0202 10:57:57.765784 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79d66b847-whsks" event={"ID":"df4aa6a3-22bf-459c-becf-3685a170ae22","Type":"ContainerStarted","Data":"e94bcc6fccde95efd30deab4e735959e10cbdfa223d7a197a11fa7bd2dc73b7b"} Feb 02 10:57:57 crc kubenswrapper[4782]: I0202 10:57:57.766058 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:57 crc kubenswrapper[4782]: I0202 10:57:57.808286 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-79d66b847-whsks" podStartSLOduration=2.808260476 podStartE2EDuration="2.808260476s" podCreationTimestamp="2026-02-02 10:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:57.798196958 +0000 UTC m=+1157.682389694" watchObservedRunningTime="2026-02-02 10:57:57.808260476 +0000 UTC m=+1157.692453202" Feb 02 10:57:58 crc kubenswrapper[4782]: I0202 10:57:58.116074 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:58 crc kubenswrapper[4782]: I0202 10:57:58.197019 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-tg7wz"] Feb 02 10:57:58 crc kubenswrapper[4782]: I0202 10:57:58.197352 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerName="dnsmasq-dns" containerID="cri-o://8739c7eaea0f7605c65d98c62cce07647aacbed0043275eb2f4dd317c1bafd75" gracePeriod=10 Feb 02 10:57:58 crc kubenswrapper[4782]: I0202 10:57:58.782787 4782 generic.go:334] "Generic (PLEG): container finished" podID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerID="8739c7eaea0f7605c65d98c62cce07647aacbed0043275eb2f4dd317c1bafd75" exitCode=0 Feb 02 10:57:58 crc kubenswrapper[4782]: I0202 10:57:58.782868 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" event={"ID":"bbab971e-9d4a-4d47-b466-ec2110de7dfb","Type":"ContainerDied","Data":"8739c7eaea0f7605c65d98c62cce07647aacbed0043275eb2f4dd317c1bafd75"} Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.020209 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.118564 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-dns-svc\") pod \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.118665 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-nb\") pod \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.118708 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9t5p\" (UniqueName: \"kubernetes.io/projected/bbab971e-9d4a-4d47-b466-ec2110de7dfb-kube-api-access-t9t5p\") pod \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.118759 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-sb\") pod \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.118808 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-config\") pod \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.124083 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbab971e-9d4a-4d47-b466-ec2110de7dfb-kube-api-access-t9t5p" (OuterVolumeSpecName: "kube-api-access-t9t5p") pod "bbab971e-9d4a-4d47-b466-ec2110de7dfb" (UID: "bbab971e-9d4a-4d47-b466-ec2110de7dfb"). InnerVolumeSpecName "kube-api-access-t9t5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.196534 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bbab971e-9d4a-4d47-b466-ec2110de7dfb" (UID: "bbab971e-9d4a-4d47-b466-ec2110de7dfb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.206249 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bbab971e-9d4a-4d47-b466-ec2110de7dfb" (UID: "bbab971e-9d4a-4d47-b466-ec2110de7dfb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.211057 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bbab971e-9d4a-4d47-b466-ec2110de7dfb" (UID: "bbab971e-9d4a-4d47-b466-ec2110de7dfb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.221066 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.221111 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9t5p\" (UniqueName: \"kubernetes.io/projected/bbab971e-9d4a-4d47-b466-ec2110de7dfb-kube-api-access-t9t5p\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.221128 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.221180 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.292015 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-config" (OuterVolumeSpecName: "config") pod "bbab971e-9d4a-4d47-b466-ec2110de7dfb" (UID: "bbab971e-9d4a-4d47-b466-ec2110de7dfb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.322839 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.797895 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" event={"ID":"bbab971e-9d4a-4d47-b466-ec2110de7dfb","Type":"ContainerDied","Data":"1c11d42eec17c1c7713f79d1bb2871fdfc39558c452b6a5339d9e0c5f17ef2bf"} Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.797961 4782 scope.go:117] "RemoveContainer" containerID="8739c7eaea0f7605c65d98c62cce07647aacbed0043275eb2f4dd317c1bafd75" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.798087 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.804350 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qjtml" event={"ID":"14e3fab7-be93-409c-a88e-85c8d0ca533c","Type":"ContainerStarted","Data":"86ae63a42dd213a82d90c920d379402488562da05112fd3a36da50fdfc632f7d"} Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.837102 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qjtml" podStartSLOduration=2.290081222 podStartE2EDuration="41.837079724s" podCreationTimestamp="2026-02-02 10:57:18 +0000 UTC" firstStartedPulling="2026-02-02 10:57:19.958810526 +0000 UTC m=+1119.843003242" lastFinishedPulling="2026-02-02 10:57:59.505809028 +0000 UTC m=+1159.390001744" observedRunningTime="2026-02-02 10:57:59.816678759 +0000 UTC m=+1159.700871485" watchObservedRunningTime="2026-02-02 10:57:59.837079724 +0000 UTC m=+1159.721272460" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.846956 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-tg7wz"] Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.853885 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-tg7wz"] Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.857693 4782 scope.go:117] "RemoveContainer" containerID="d612f10c6156f6cb4afac9aec45e071dc15d31ca60fe0b15bf367f1991040e4f" Feb 02 10:58:00 crc kubenswrapper[4782]: I0202 10:58:00.833858 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" path="/var/lib/kubelet/pods/bbab971e-9d4a-4d47-b466-ec2110de7dfb/volumes" Feb 02 10:58:00 crc kubenswrapper[4782]: I0202 10:58:00.834413 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvrqj" event={"ID":"bf4fe919-15fe-4478-be0f-8e3bf00147b4","Type":"ContainerStarted","Data":"e47203a1a44b3b88fecb28ffdf42000d4b85a4d8f915c7dc05cd21438f5304c4"} Feb 02 10:58:00 crc kubenswrapper[4782]: I0202 10:58:00.863045 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rvrqj" podStartSLOduration=4.099556453 podStartE2EDuration="42.863022124s" podCreationTimestamp="2026-02-02 10:57:18 +0000 UTC" firstStartedPulling="2026-02-02 10:57:20.740264177 +0000 UTC m=+1120.624456893" lastFinishedPulling="2026-02-02 10:57:59.503729838 +0000 UTC m=+1159.387922564" observedRunningTime="2026-02-02 10:58:00.85520343 +0000 UTC m=+1160.739396146" watchObservedRunningTime="2026-02-02 10:58:00.863022124 +0000 UTC m=+1160.747214840" Feb 02 10:58:03 crc kubenswrapper[4782]: I0202 10:58:03.851772 4782 generic.go:334] "Generic (PLEG): container finished" podID="14e3fab7-be93-409c-a88e-85c8d0ca533c" containerID="86ae63a42dd213a82d90c920d379402488562da05112fd3a36da50fdfc632f7d" exitCode=0 Feb 02 10:58:03 crc kubenswrapper[4782]: I0202 10:58:03.851825 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qjtml" event={"ID":"14e3fab7-be93-409c-a88e-85c8d0ca533c","Type":"ContainerDied","Data":"86ae63a42dd213a82d90c920d379402488562da05112fd3a36da50fdfc632f7d"} Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.408072 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qjtml" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.555107 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-db-sync-config-data\") pod \"14e3fab7-be93-409c-a88e-85c8d0ca533c\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.555699 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbbw7\" (UniqueName: \"kubernetes.io/projected/14e3fab7-be93-409c-a88e-85c8d0ca533c-kube-api-access-jbbw7\") pod \"14e3fab7-be93-409c-a88e-85c8d0ca533c\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.555730 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-combined-ca-bundle\") pod \"14e3fab7-be93-409c-a88e-85c8d0ca533c\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.559220 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "14e3fab7-be93-409c-a88e-85c8d0ca533c" (UID: "14e3fab7-be93-409c-a88e-85c8d0ca533c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.559374 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e3fab7-be93-409c-a88e-85c8d0ca533c-kube-api-access-jbbw7" (OuterVolumeSpecName: "kube-api-access-jbbw7") pod "14e3fab7-be93-409c-a88e-85c8d0ca533c" (UID: "14e3fab7-be93-409c-a88e-85c8d0ca533c"). InnerVolumeSpecName "kube-api-access-jbbw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.578025 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14e3fab7-be93-409c-a88e-85c8d0ca533c" (UID: "14e3fab7-be93-409c-a88e-85c8d0ca533c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.658093 4782 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.658139 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbbw7\" (UniqueName: \"kubernetes.io/projected/14e3fab7-be93-409c-a88e-85c8d0ca533c-kube-api-access-jbbw7\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.658149 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.881150 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qjtml" event={"ID":"14e3fab7-be93-409c-a88e-85c8d0ca533c","Type":"ContainerDied","Data":"cf01f314448485ff21bcd2728c714dedb197b922c6d0f496ca141e9405a41bab"} Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.881227 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf01f314448485ff21bcd2728c714dedb197b922c6d0f496ca141e9405a41bab" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.881255 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qjtml" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.885995 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerStarted","Data":"140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96"} Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.886232 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="proxy-httpd" containerID="cri-o://140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96" gracePeriod=30 Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.886255 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="sg-core" containerID="cri-o://204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f" gracePeriod=30 Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.886247 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-central-agent" containerID="cri-o://cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08" gracePeriod=30 Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.886240 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.886280 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-notification-agent" containerID="cri-o://64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9" gracePeriod=30 Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.927206 4782 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.157918549 podStartE2EDuration="48.926992622s" podCreationTimestamp="2026-02-02 10:57:18 +0000 UTC" firstStartedPulling="2026-02-02 10:57:20.701943799 +0000 UTC m=+1120.586136515" lastFinishedPulling="2026-02-02 10:58:06.471017872 +0000 UTC m=+1166.355210588" observedRunningTime="2026-02-02 10:58:06.924336346 +0000 UTC m=+1166.808529062" watchObservedRunningTime="2026-02-02 10:58:06.926992622 +0000 UTC m=+1166.811185338" Feb 02 10:58:07 crc kubenswrapper[4782]: E0202 10:58:07.226406 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eb720ee_de8d_42e4_b189_aa3d58478ab9.slice/crio-conmon-cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08.scope\": RecentStats: unable to find data in memory cache]" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.729164 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b54d776c6-xrdvf"] Feb 02 10:58:07 crc kubenswrapper[4782]: E0202 10:58:07.729836 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e3fab7-be93-409c-a88e-85c8d0ca533c" containerName="barbican-db-sync" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.729849 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e3fab7-be93-409c-a88e-85c8d0ca533c" containerName="barbican-db-sync" Feb 02 10:58:07 crc kubenswrapper[4782]: E0202 10:58:07.729870 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerName="dnsmasq-dns" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.729878 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerName="dnsmasq-dns" Feb 02 10:58:07 crc kubenswrapper[4782]: E0202 10:58:07.729890 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerName="init" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.729896 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerName="init" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.730060 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e3fab7-be93-409c-a88e-85c8d0ca533c" containerName="barbican-db-sync" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.730080 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerName="dnsmasq-dns" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.730951 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.740506 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tpp6m" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.740748 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.740989 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.753535 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5bbfd966d5-c6jc5"] Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.755166 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.757894 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.777750 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-logs\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.777806 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-config-data\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.777851 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2f7g\" (UniqueName: \"kubernetes.io/projected/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-kube-api-access-r2f7g\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.777903 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-combined-ca-bundle\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.777937 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-config-data-custom\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.790397 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b54d776c6-xrdvf"] Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 
10:58:07.817923 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5bbfd966d5-c6jc5"] Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.877713 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-q57sq"] Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.878995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-config-data-custom\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879018 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879044 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-config-data\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879112 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-logs\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879138 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-combined-ca-bundle\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879173 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-658rt\" (UniqueName: \"kubernetes.io/projected/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-kube-api-access-658rt\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879204 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-config-data\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879243 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-logs\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879296 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2f7g\" (UniqueName: 
\"kubernetes.io/projected/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-kube-api-access-r2f7g\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879369 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-config-data-custom\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879409 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-combined-ca-bundle\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.886377 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-config-data-custom\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.888198 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-logs\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.893070 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-config-data\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.897116 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-q57sq"] Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.901520 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-combined-ca-bundle\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.917287 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2f7g\" (UniqueName: \"kubernetes.io/projected/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-kube-api-access-r2f7g\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.917653 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerDied","Data":"204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f"} Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.919817 4782 generic.go:334] "Generic (PLEG): container finished" podID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerID="204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f" exitCode=2 Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.919851 4782 generic.go:334] "Generic (PLEG): container finished" podID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerID="cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08" exitCode=0 Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.919883 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerDied","Data":"cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08"} Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.982681 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-combined-ca-bundle\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983017 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-658rt\" (UniqueName: \"kubernetes.io/projected/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-kube-api-access-658rt\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983252 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-logs\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983358 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983445 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983507 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-config\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983580 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24sn\" 
(UniqueName: \"kubernetes.io/projected/b226dd37-b5b5-4514-9495-944db6e760ed-kube-api-access-x24sn\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983681 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-config-data-custom\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983768 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-dns-svc\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983913 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-config-data\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.985099 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-logs\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.991517 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-combined-ca-bundle\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.996105 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-config-data\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.010379 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-config-data-custom\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.020489 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-658rt\" (UniqueName: \"kubernetes.io/projected/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-kube-api-access-658rt\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.029891 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b7797d578-tmg69"] Feb 02 10:58:08 crc 
kubenswrapper[4782]: I0202 10:58:08.031163 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.035617 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.053079 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b7797d578-tmg69"] Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.073398 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.084000 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.085886 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.085937 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.085980 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086015 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-config\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086045 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x24sn\" (UniqueName: \"kubernetes.io/projected/b226dd37-b5b5-4514-9495-944db6e760ed-kube-api-access-x24sn\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086061 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-logs\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086086 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-combined-ca-bundle\") pod \"barbican-api-5b7797d578-tmg69\" 
(UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086114 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data-custom\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086143 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-dns-svc\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086206 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkkzp\" (UniqueName: \"kubernetes.io/projected/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-kube-api-access-xkkzp\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.094629 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-dns-svc\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.094706 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.094896 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-config\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.098498 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.128385 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x24sn\" (UniqueName: \"kubernetes.io/projected/b226dd37-b5b5-4514-9495-944db6e760ed-kube-api-access-x24sn\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.187665 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkkzp\" (UniqueName: \"kubernetes.io/projected/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-kube-api-access-xkkzp\") pod \"barbican-api-5b7797d578-tmg69\" (UID: 
\"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.187774 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.187810 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-logs\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.187831 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-combined-ca-bundle\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.187852 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data-custom\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.189413 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-logs\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.195948 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.197755 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data-custom\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.199249 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-combined-ca-bundle\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.221230 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkkzp\" (UniqueName: \"kubernetes.io/projected/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-kube-api-access-xkkzp\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 
10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.284920 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.424584 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.930459 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b54d776c6-xrdvf"] Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.935679 4782 generic.go:334] "Generic (PLEG): container finished" podID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerID="64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9" exitCode=0 Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.935727 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerDied","Data":"64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9"} Feb 02 10:58:08 crc kubenswrapper[4782]: W0202 10:58:08.944630 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea0f5849_bbf6_4184_8b8c_8e11cd8da661.slice/crio-4ba958d5e10b634a80212fd66245e6abcc7f4c4345be8d8ee5c1dfc0dd0a0985 WatchSource:0}: Error finding container 4ba958d5e10b634a80212fd66245e6abcc7f4c4345be8d8ee5c1dfc0dd0a0985: Status 404 returned error can't find the container with id 4ba958d5e10b634a80212fd66245e6abcc7f4c4345be8d8ee5c1dfc0dd0a0985 Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.980504 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5bbfd966d5-c6jc5"] Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.184722 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-q57sq"] Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.283091 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b7797d578-tmg69"] Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.955507 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7797d578-tmg69" event={"ID":"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0","Type":"ContainerStarted","Data":"de993ad71b2389fa8f527a4a099b49faf994e7a1a4f1e91b9ec465c30000ae3f"} Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.956047 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7797d578-tmg69" event={"ID":"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0","Type":"ContainerStarted","Data":"985a045376b8765a9f6e8767fddd038e288b28f20d40fab7634f3c8194dfd573"} Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.956075 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.956088 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7797d578-tmg69" event={"ID":"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0","Type":"ContainerStarted","Data":"486044541d5c070266883b9d8c5a598bb41438b6bc2f68afedb4cd643ff3c9ee"} Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.958232 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bbfd966d5-c6jc5" 
event={"ID":"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f","Type":"ContainerStarted","Data":"5c3401638f23752d95df9b5b67a3bf8e7509f28b511cb239856400db8f006025"} Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.962617 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" event={"ID":"ea0f5849-bbf6-4184-8b8c-8e11cd8da661","Type":"ContainerStarted","Data":"4ba958d5e10b634a80212fd66245e6abcc7f4c4345be8d8ee5c1dfc0dd0a0985"} Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.965874 4782 generic.go:334] "Generic (PLEG): container finished" podID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" containerID="e47203a1a44b3b88fecb28ffdf42000d4b85a4d8f915c7dc05cd21438f5304c4" exitCode=0 Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.966035 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvrqj" event={"ID":"bf4fe919-15fe-4478-be0f-8e3bf00147b4","Type":"ContainerDied","Data":"e47203a1a44b3b88fecb28ffdf42000d4b85a4d8f915c7dc05cd21438f5304c4"} Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.967402 4782 generic.go:334] "Generic (PLEG): container finished" podID="b226dd37-b5b5-4514-9495-944db6e760ed" containerID="fd4e9ce89e9962fa17cc57f028ec76e94e318eabc07d088e7fecd0989f0c912c" exitCode=0 Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.967447 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" event={"ID":"b226dd37-b5b5-4514-9495-944db6e760ed","Type":"ContainerDied","Data":"fd4e9ce89e9962fa17cc57f028ec76e94e318eabc07d088e7fecd0989f0c912c"} Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.967468 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" event={"ID":"b226dd37-b5b5-4514-9495-944db6e760ed","Type":"ContainerStarted","Data":"42049f171b77a573283352f44491fb48841eb34d9cc4039ea25a8c1b150ccf44"} Feb 02 10:58:10 crc kubenswrapper[4782]: I0202 10:58:10.001736 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b7797d578-tmg69" podStartSLOduration=3.001716502 podStartE2EDuration="3.001716502s" podCreationTimestamp="2026-02-02 10:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:09.983028846 +0000 UTC m=+1169.867221552" watchObservedRunningTime="2026-02-02 10:58:10.001716502 +0000 UTC m=+1169.885909218" Feb 02 10:58:10 crc kubenswrapper[4782]: I0202 10:58:10.982138 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.101127 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-77c4d8f8d8-7qmjv"] Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.103384 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.106306 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.107546 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.144554 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77c4d8f8d8-7qmjv"] Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-config-data\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175711 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-public-tls-certs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175742 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-config-data-custom\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175777 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98nrz\" (UniqueName: \"kubernetes.io/projected/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-kube-api-access-98nrz\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175843 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-internal-tls-certs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175870 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-combined-ca-bundle\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175889 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-logs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.277692 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-internal-tls-certs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.277750 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-combined-ca-bundle\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.277775 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-logs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.277819 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-config-data\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.277851 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-public-tls-certs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.277873 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-config-data-custom\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.278240 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98nrz\" (UniqueName: \"kubernetes.io/projected/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-kube-api-access-98nrz\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.290674 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-logs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.300564 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-combined-ca-bundle\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.320950 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-98nrz\" (UniqueName: \"kubernetes.io/projected/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-kube-api-access-98nrz\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.321789 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-public-tls-certs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.322718 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-config-data-custom\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.323714 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-config-data\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.339829 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-internal-tls-certs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.534542 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.727658 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.802744 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-combined-ca-bundle\") pod \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.802880 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpd9r\" (UniqueName: \"kubernetes.io/projected/bf4fe919-15fe-4478-be0f-8e3bf00147b4-kube-api-access-cpd9r\") pod \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.802938 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-scripts\") pod \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.802990 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf4fe919-15fe-4478-be0f-8e3bf00147b4-etc-machine-id\") pod \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.803010 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-config-data\") pod \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.803097 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-db-sync-config-data\") pod \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.804309 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf4fe919-15fe-4478-be0f-8e3bf00147b4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bf4fe919-15fe-4478-be0f-8e3bf00147b4" (UID: "bf4fe919-15fe-4478-be0f-8e3bf00147b4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.807740 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bf4fe919-15fe-4478-be0f-8e3bf00147b4" (UID: "bf4fe919-15fe-4478-be0f-8e3bf00147b4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.809230 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4fe919-15fe-4478-be0f-8e3bf00147b4-kube-api-access-cpd9r" (OuterVolumeSpecName: "kube-api-access-cpd9r") pod "bf4fe919-15fe-4478-be0f-8e3bf00147b4" (UID: "bf4fe919-15fe-4478-be0f-8e3bf00147b4"). InnerVolumeSpecName "kube-api-access-cpd9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.810800 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-scripts" (OuterVolumeSpecName: "scripts") pod "bf4fe919-15fe-4478-be0f-8e3bf00147b4" (UID: "bf4fe919-15fe-4478-be0f-8e3bf00147b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.853589 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf4fe919-15fe-4478-be0f-8e3bf00147b4" (UID: "bf4fe919-15fe-4478-be0f-8e3bf00147b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.884892 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-config-data" (OuterVolumeSpecName: "config-data") pod "bf4fe919-15fe-4478-be0f-8e3bf00147b4" (UID: "bf4fe919-15fe-4478-be0f-8e3bf00147b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.905836 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf4fe919-15fe-4478-be0f-8e3bf00147b4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.905878 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.905891 4782 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.905902 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.905943 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpd9r\" (UniqueName: \"kubernetes.io/projected/bf4fe919-15fe-4478-be0f-8e3bf00147b4-kube-api-access-cpd9r\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.905957 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.992892 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bbfd966d5-c6jc5" event={"ID":"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f","Type":"ContainerStarted","Data":"c4b7a9fca7bbe65ad62776dcc2946b55ab5068890b3eed0e7ba1c4e7cd70b780"} Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.992931 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bbfd966d5-c6jc5" 
event={"ID":"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f","Type":"ContainerStarted","Data":"691965a28ee6d33d5f274ea50fd745ad1d0d692a3a18694fed1901bca1389b85"} Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.994720 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" event={"ID":"ea0f5849-bbf6-4184-8b8c-8e11cd8da661","Type":"ContainerStarted","Data":"9ffe45871c20724a52164e96ffc54f08b31fbb9784845d325e9355ca338db887"} Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.994757 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" event={"ID":"ea0f5849-bbf6-4184-8b8c-8e11cd8da661","Type":"ContainerStarted","Data":"ded6bfd32cc21311a5b7d2538d7c1590f1501ff96c1a6faf86e998fabc099321"} Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.998219 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvrqj" event={"ID":"bf4fe919-15fe-4478-be0f-8e3bf00147b4","Type":"ContainerDied","Data":"2b88f70f23d6438ad4880535e90d03f7ddcd1e6596512bcb845cbde82cf71a29"} Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.998240 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b88f70f23d6438ad4880535e90d03f7ddcd1e6596512bcb845cbde82cf71a29" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.998253 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.001805 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" event={"ID":"b226dd37-b5b5-4514-9495-944db6e760ed","Type":"ContainerStarted","Data":"69e2e8d1b7e676b9d7aaaa32e112116126f5e856b2700909a615b248357c5001"} Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.001851 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.014422 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5bbfd966d5-c6jc5" podStartSLOduration=3.078363129 podStartE2EDuration="5.014405148s" podCreationTimestamp="2026-02-02 10:58:07 +0000 UTC" firstStartedPulling="2026-02-02 10:58:09.000208383 +0000 UTC m=+1168.884401109" lastFinishedPulling="2026-02-02 10:58:10.936250412 +0000 UTC m=+1170.820443128" observedRunningTime="2026-02-02 10:58:12.007516871 +0000 UTC m=+1171.891709587" watchObservedRunningTime="2026-02-02 10:58:12.014405148 +0000 UTC m=+1171.898597864" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.037359 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" podStartSLOduration=5.037339056 podStartE2EDuration="5.037339056s" podCreationTimestamp="2026-02-02 10:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:12.024154348 +0000 UTC m=+1171.908347074" watchObservedRunningTime="2026-02-02 10:58:12.037339056 +0000 UTC m=+1171.921531772" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.060885 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" podStartSLOduration=3.071554824 podStartE2EDuration="5.06086911s" podCreationTimestamp="2026-02-02 10:58:07 +0000 UTC" firstStartedPulling="2026-02-02 
10:58:08.949082057 +0000 UTC m=+1168.833274773" lastFinishedPulling="2026-02-02 10:58:10.938396343 +0000 UTC m=+1170.822589059" observedRunningTime="2026-02-02 10:58:12.054765635 +0000 UTC m=+1171.938958351" watchObservedRunningTime="2026-02-02 10:58:12.06086911 +0000 UTC m=+1171.945061826" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.098579 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77c4d8f8d8-7qmjv"] Feb 02 10:58:12 crc kubenswrapper[4782]: W0202 10:58:12.109615 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52b9ad9f_f95d_4839_9531_4f0f11ca86ff.slice/crio-faf5160538972d662629346cf24cce84d9c1ba214d63c00e9170592ec413eeae WatchSource:0}: Error finding container faf5160538972d662629346cf24cce84d9c1ba214d63c00e9170592ec413eeae: Status 404 returned error can't find the container with id faf5160538972d662629346cf24cce84d9c1ba214d63c00e9170592ec413eeae Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.404777 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:12 crc kubenswrapper[4782]: E0202 10:58:12.405378 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" containerName="cinder-db-sync" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.405393 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" containerName="cinder-db-sync" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.405563 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" containerName="cinder-db-sync" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.406452 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.411407 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.411770 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.414514 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.415758 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l47mf" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.415814 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.515609 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.515683 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htwrb\" (UniqueName: \"kubernetes.io/projected/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-kube-api-access-htwrb\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.515742 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.515761 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.515809 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.515823 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-scripts\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.547634 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-q57sq"] Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.620502 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.620544 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.620597 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.620614 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-scripts\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.620689 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.620715 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htwrb\" (UniqueName: \"kubernetes.io/projected/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-kube-api-access-htwrb\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.621001 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.630916 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.638350 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.648395 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.657043 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-scripts\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.668699 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-lp4zt"] Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.670484 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.694691 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htwrb\" (UniqueName: \"kubernetes.io/projected/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-kube-api-access-htwrb\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.703193 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-lp4zt"] Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.725882 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxbzb\" (UniqueName: \"kubernetes.io/projected/aeac5df4-fc17-4840-b777-4b20a71f603b-kube-api-access-cxbzb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.725939 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-config\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.725965 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.726298 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.726423 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.733270 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.832606 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxbzb\" (UniqueName: \"kubernetes.io/projected/aeac5df4-fc17-4840-b777-4b20a71f603b-kube-api-access-cxbzb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.839512 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-config\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.840103 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.840354 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.840651 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.841862 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.844761 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.845337 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.850258 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-config\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.868339 4782 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.870197 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.874465 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxbzb\" (UniqueName: \"kubernetes.io/projected/aeac5df4-fc17-4840-b777-4b20a71f603b-kube-api-access-cxbzb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.877259 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.893182 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.942727 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43699695-b676-4b62-8714-c01390804d91-logs\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.942835 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.942884 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzgjd\" (UniqueName: \"kubernetes.io/projected/43699695-b676-4b62-8714-c01390804d91-kube-api-access-qzgjd\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.942965 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43699695-b676-4b62-8714-c01390804d91-etc-machine-id\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.942989 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-scripts\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.943028 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.943055 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data-custom\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " 
pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.002842 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.024039 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" event={"ID":"52b9ad9f-f95d-4839-9531-4f0f11ca86ff","Type":"ContainerStarted","Data":"f288acbe921357fa3278c93e72e64296c1366f45e89f011ffdacc77b05974ad4"} Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.024087 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" event={"ID":"52b9ad9f-f95d-4839-9531-4f0f11ca86ff","Type":"ContainerStarted","Data":"7b49f80f062f7887bfc3b6ca104586f8651572648e02a453ea0aa9c52e2f1126"} Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.024100 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" event={"ID":"52b9ad9f-f95d-4839-9531-4f0f11ca86ff","Type":"ContainerStarted","Data":"faf5160538972d662629346cf24cce84d9c1ba214d63c00e9170592ec413eeae"} Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.058123 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43699695-b676-4b62-8714-c01390804d91-logs\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.058557 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.058631 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzgjd\" (UniqueName: \"kubernetes.io/projected/43699695-b676-4b62-8714-c01390804d91-kube-api-access-qzgjd\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.058785 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43699695-b676-4b62-8714-c01390804d91-etc-machine-id\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.058865 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-scripts\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.059019 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.059051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.063279 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43699695-b676-4b62-8714-c01390804d91-logs\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.064976 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.069087 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data-custom\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.071187 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43699695-b676-4b62-8714-c01390804d91-etc-machine-id\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.076605 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-scripts\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.083632 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.105144 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzgjd\" (UniqueName: \"kubernetes.io/projected/43699695-b676-4b62-8714-c01390804d91-kube-api-access-qzgjd\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.216061 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.488782 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.659704 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-lp4zt"] Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.031053 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" event={"ID":"aeac5df4-fc17-4840-b777-4b20a71f603b","Type":"ContainerStarted","Data":"8c21bb8034d9faf7eb546bc39d481d9fb7112330466d208d373ff1d4cfc5503c"} Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.032580 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" containerName="dnsmasq-dns" containerID="cri-o://69e2e8d1b7e676b9d7aaaa32e112116126f5e856b2700909a615b248357c5001" gracePeriod=10 Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.032942 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f","Type":"ContainerStarted","Data":"5a7279bfcc6fbedda5247577044693b3e9c719e1402b5cf6df02c6d805661e2a"} Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.032975 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.032987 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.081251 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" podStartSLOduration=3.081228565 podStartE2EDuration="3.081228565s" podCreationTimestamp="2026-02-02 10:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:14.054485729 +0000 UTC m=+1173.938678465" watchObservedRunningTime="2026-02-02 10:58:14.081228565 +0000 UTC m=+1173.965421281" Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.097721 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.869915 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.050473 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"43699695-b676-4b62-8714-c01390804d91","Type":"ContainerStarted","Data":"ea5226b97ad240049c9f39d23e381c57be0f9553f067d978ea59153b623b3d90"} Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.126959 4782 generic.go:334] "Generic (PLEG): container finished" podID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerID="235f00f5818c6de4755dcefb6a2d4359499a9277fdd4c9df3ba1b496dc87e676" exitCode=0 Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.127048 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" event={"ID":"aeac5df4-fc17-4840-b777-4b20a71f603b","Type":"ContainerDied","Data":"235f00f5818c6de4755dcefb6a2d4359499a9277fdd4c9df3ba1b496dc87e676"} Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.196828 4782 generic.go:334] "Generic (PLEG): container 
finished" podID="b226dd37-b5b5-4514-9495-944db6e760ed" containerID="69e2e8d1b7e676b9d7aaaa32e112116126f5e856b2700909a615b248357c5001" exitCode=0 Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.197885 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" event={"ID":"b226dd37-b5b5-4514-9495-944db6e760ed","Type":"ContainerDied","Data":"69e2e8d1b7e676b9d7aaaa32e112116126f5e856b2700909a615b248357c5001"} Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.272523 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.335342 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x24sn\" (UniqueName: \"kubernetes.io/projected/b226dd37-b5b5-4514-9495-944db6e760ed-kube-api-access-x24sn\") pod \"b226dd37-b5b5-4514-9495-944db6e760ed\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.335394 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-dns-svc\") pod \"b226dd37-b5b5-4514-9495-944db6e760ed\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.335427 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-config\") pod \"b226dd37-b5b5-4514-9495-944db6e760ed\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.335484 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-sb\") pod \"b226dd37-b5b5-4514-9495-944db6e760ed\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.335534 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-nb\") pod \"b226dd37-b5b5-4514-9495-944db6e760ed\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.360165 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b226dd37-b5b5-4514-9495-944db6e760ed-kube-api-access-x24sn" (OuterVolumeSpecName: "kube-api-access-x24sn") pod "b226dd37-b5b5-4514-9495-944db6e760ed" (UID: "b226dd37-b5b5-4514-9495-944db6e760ed"). InnerVolumeSpecName "kube-api-access-x24sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.437071 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x24sn\" (UniqueName: \"kubernetes.io/projected/b226dd37-b5b5-4514-9495-944db6e760ed-kube-api-access-x24sn\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.467439 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b226dd37-b5b5-4514-9495-944db6e760ed" (UID: "b226dd37-b5b5-4514-9495-944db6e760ed"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.480202 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b226dd37-b5b5-4514-9495-944db6e760ed" (UID: "b226dd37-b5b5-4514-9495-944db6e760ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.483093 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b226dd37-b5b5-4514-9495-944db6e760ed" (UID: "b226dd37-b5b5-4514-9495-944db6e760ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.517415 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-config" (OuterVolumeSpecName: "config") pod "b226dd37-b5b5-4514-9495-944db6e760ed" (UID: "b226dd37-b5b5-4514-9495-944db6e760ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.540855 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.540895 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.540907 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.540916 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.221221 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" event={"ID":"aeac5df4-fc17-4840-b777-4b20a71f603b","Type":"ContainerStarted","Data":"e8f8698969d1d3fb94fd61cad2a7db600e14b7bfd8f48ebf089d1152818d17de"} Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.221515 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.232338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" event={"ID":"b226dd37-b5b5-4514-9495-944db6e760ed","Type":"ContainerDied","Data":"42049f171b77a573283352f44491fb48841eb34d9cc4039ea25a8c1b150ccf44"} Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.232660 4782 scope.go:117] "RemoveContainer" containerID="69e2e8d1b7e676b9d7aaaa32e112116126f5e856b2700909a615b248357c5001" Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.232774 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.258694 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f","Type":"ContainerStarted","Data":"d8a0b8246e525961c77b2290fcb94427f50f92ca53fb13fcaaac7f8ae8d09e97"} Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.262358 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"43699695-b676-4b62-8714-c01390804d91","Type":"ContainerStarted","Data":"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9"} Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.282997 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" podStartSLOduration=4.28297369 podStartE2EDuration="4.28297369s" podCreationTimestamp="2026-02-02 10:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:16.2460163 +0000 UTC m=+1176.130209016" watchObservedRunningTime="2026-02-02 10:58:16.28297369 +0000 UTC m=+1176.167166406" Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.283700 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-q57sq"] Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.297689 4782 scope.go:117] "RemoveContainer" containerID="fd4e9ce89e9962fa17cc57f028ec76e94e318eabc07d088e7fecd0989f0c912c" Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.303356 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-q57sq"] Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.838249 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" path="/var/lib/kubelet/pods/b226dd37-b5b5-4514-9495-944db6e760ed/volumes" Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.272773 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"43699695-b676-4b62-8714-c01390804d91","Type":"ContainerStarted","Data":"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd"} Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.272857 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.272873 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api-log" containerID="cri-o://0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9" gracePeriod=30 Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.272914 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api" containerID="cri-o://810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd" gracePeriod=30 Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.282221 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f","Type":"ContainerStarted","Data":"1005c6933c36048ea8d7266b59a453bf785f8fbd2b74ce215f2b2456d907b65a"} Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.304429 4782 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.304411991 podStartE2EDuration="5.304411991s" podCreationTimestamp="2026-02-02 10:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:17.298583943 +0000 UTC m=+1177.182776659" watchObservedRunningTime="2026-02-02 10:58:17.304411991 +0000 UTC m=+1177.188604707" Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.335896 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.276444872 podStartE2EDuration="5.335875132s" podCreationTimestamp="2026-02-02 10:58:12 +0000 UTC" firstStartedPulling="2026-02-02 10:58:13.520501481 +0000 UTC m=+1173.404694197" lastFinishedPulling="2026-02-02 10:58:14.579931741 +0000 UTC m=+1174.464124457" observedRunningTime="2026-02-02 10:58:17.327634106 +0000 UTC m=+1177.211826832" watchObservedRunningTime="2026-02-02 10:58:17.335875132 +0000 UTC m=+1177.220067848" Feb 02 10:58:17 crc kubenswrapper[4782]: E0202 10:58:17.525774 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43699695_b676_4b62_8714_c01390804d91.slice/crio-conmon-0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43699695_b676_4b62_8714_c01390804d91.slice/crio-0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9.scope\": RecentStats: unable to find data in memory cache]" Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.734441 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.220081 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.291301 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294271 4782 generic.go:334] "Generic (PLEG): container finished" podID="43699695-b676-4b62-8714-c01390804d91" containerID="810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd" exitCode=0 Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294309 4782 generic.go:334] "Generic (PLEG): container finished" podID="43699695-b676-4b62-8714-c01390804d91" containerID="0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9" exitCode=143 Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294495 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"43699695-b676-4b62-8714-c01390804d91","Type":"ContainerDied","Data":"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd"} Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294529 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"43699695-b676-4b62-8714-c01390804d91","Type":"ContainerDied","Data":"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9"} Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294542 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"43699695-b676-4b62-8714-c01390804d91","Type":"ContainerDied","Data":"ea5226b97ad240049c9f39d23e381c57be0f9553f067d978ea59153b623b3d90"} Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294557 4782 scope.go:117] "RemoveContainer" containerID="810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294793 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332364 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43699695-b676-4b62-8714-c01390804d91-logs\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332416 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-scripts\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332588 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43699695-b676-4b62-8714-c01390804d91-etc-machine-id\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332620 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-combined-ca-bundle\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332658 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332703 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzgjd\" (UniqueName: \"kubernetes.io/projected/43699695-b676-4b62-8714-c01390804d91-kube-api-access-qzgjd\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332769 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data-custom\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.335421 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43699695-b676-4b62-8714-c01390804d91-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.336854 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43699695-b676-4b62-8714-c01390804d91-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.352802 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.355002 4782 scope.go:117] "RemoveContainer" containerID="0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.356879 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43699695-b676-4b62-8714-c01390804d91-logs" (OuterVolumeSpecName: "logs") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.383246 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43699695-b676-4b62-8714-c01390804d91-kube-api-access-qzgjd" (OuterVolumeSpecName: "kube-api-access-qzgjd") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "kube-api-access-qzgjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.408805 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-scripts" (OuterVolumeSpecName: "scripts") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.442921 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzgjd\" (UniqueName: \"kubernetes.io/projected/43699695-b676-4b62-8714-c01390804d91-kube-api-access-qzgjd\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.442950 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.442960 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43699695-b676-4b62-8714-c01390804d91-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.442970 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.451134 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.484798 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data" (OuterVolumeSpecName: "config-data") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.544615 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.544681 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.554814 4782 scope.go:117] "RemoveContainer" containerID="810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd" Feb 02 10:58:18 crc kubenswrapper[4782]: E0202 10:58:18.555346 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd\": container with ID starting with 810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd not found: ID does not exist" containerID="810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.555380 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd"} err="failed to get container status \"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd\": rpc error: code = NotFound desc = could not find container \"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd\": container with ID starting with 810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd not found: ID does not exist" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.555406 4782 scope.go:117] "RemoveContainer" containerID="0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9" Feb 02 10:58:18 crc kubenswrapper[4782]: E0202 10:58:18.555814 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9\": container with ID starting with 0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9 not found: ID does not exist" containerID="0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.555839 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9"} err="failed to get container status \"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9\": rpc error: code = NotFound desc = could not find container \"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9\": container with ID starting with 0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9 not found: ID does not exist" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.555856 4782 scope.go:117] "RemoveContainer" containerID="810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.556155 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd"} err="failed to get container status 
\"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd\": rpc error: code = NotFound desc = could not find container \"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd\": container with ID starting with 810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd not found: ID does not exist" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.556169 4782 scope.go:117] "RemoveContainer" containerID="0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.556386 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9"} err="failed to get container status \"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9\": rpc error: code = NotFound desc = could not find container \"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9\": container with ID starting with 0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9 not found: ID does not exist" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.572964 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-644b87c8cc-7cfbr"] Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.573403 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-644b87c8cc-7cfbr" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-api" containerID="cri-o://786fa8a7409c75cc654771de8b93722936d3a222c6348896fdf1e92677c32d53" gracePeriod=30 Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.574024 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-644b87c8cc-7cfbr" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" containerID="cri-o://f08c19bb6d845c3aebb5e1bac413c2ca39e5ea569262e16c1b854d1f98b2646c" gracePeriod=30 Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.590032 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-644b87c8cc-7cfbr" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.141:9696/\": read tcp 10.217.0.2:38578->10.217.0.141:9696: read: connection reset by peer" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651265 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5bdf8f4745-82ddm"] Feb 02 10:58:18 crc kubenswrapper[4782]: E0202 10:58:18.651630 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" containerName="init" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651679 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" containerName="init" Feb 02 10:58:18 crc kubenswrapper[4782]: E0202 10:58:18.651716 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api-log" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651722 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api-log" Feb 02 10:58:18 crc kubenswrapper[4782]: E0202 10:58:18.651728 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651734 4782 
Feb 02 10:58:18 crc kubenswrapper[4782]: E0202 10:58:18.651741 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" containerName="dnsmasq-dns"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651749 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" containerName="dnsmasq-dns"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651900 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api-log"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651913 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651930 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" containerName="dnsmasq-dns"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.656313 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.658580 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.679766 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.681941 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bdf8f4745-82ddm"]
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.708921 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.710937 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
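The cpu_manager / state_mem / memory_manager lines fire while the replacement pods are being admitted: before handing out resources, both managers prune bookkeeping for containers whose pods no longer exist. Conceptually that is deleting entries from a map keyed by pod UID and container name; a loose sketch with invented names:

package resourcestate

// key identifies one container's resource assignment, like the
// podUID/containerName pairs in the RemoveStaleState records above.
type key struct{ podUID, container string }

// removeStaleState drops assignments whose pod is no longer active,
// a simplified take on the cleanup the cpu and memory managers log.
func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			delete(assignments, k) // "Deleted CPUSet assignment"
		}
	}
}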
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.725742 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.725991 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.726098 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.729812 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750160 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-combined-ca-bundle\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750225 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-public-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750265 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-ovndb-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750290 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-config\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750320 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-httpd-config\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750340 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp52w\" (UniqueName: \"kubernetes.io/projected/ab6192fa-a576-411f-8083-2d6bfa57c39f-kube-api-access-zp52w\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750390 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-internal-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.843464 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43699695-b676-4b62-8714-c01390804d91" path="/var/lib/kubelet/pods/43699695-b676-4b62-8714-c01390804d91/volumes"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857572 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857632 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857676 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-scripts\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857701 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7zq\" (UniqueName: \"kubernetes.io/projected/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-kube-api-access-pp7zq\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857770 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857812 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857860 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-combined-ca-bundle\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857902 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-public-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857968 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-logs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857999 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-ovndb-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.858038 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-config\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.858106 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-httpd-config\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.858147 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp52w\" (UniqueName: \"kubernetes.io/projected/ab6192fa-a576-411f-8083-2d6bfa57c39f-kube-api-access-zp52w\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.858167 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-config-data\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.858185 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.858292 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-internal-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.870547 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-ovndb-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.870669 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-internal-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.870564 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-combined-ca-bundle\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.875725 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-public-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.876490 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-config\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.883263 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-httpd-config\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.894177 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp52w\" (UniqueName: \"kubernetes.io/projected/ab6192fa-a576-411f-8083-2d6bfa57c39f-kube-api-access-zp52w\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.964152 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.971948 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.972195 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-scripts\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.972281 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp7zq\" (UniqueName: \"kubernetes.io/projected/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-kube-api-access-pp7zq\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.972444 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.972537 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.972787 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-logs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.972966 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-config-data\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.973043 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.975936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-logs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.968385 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.985158 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.985882 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.990311 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.994041 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-scripts\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 
Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.994886 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-config-data\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.005332 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp7zq\" (UniqueName: \"kubernetes.io/projected/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-kube-api-access-pp7zq\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.005340 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0"
Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.043806 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bdf8f4745-82ddm"
Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.070113 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.430427 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.760705 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bdf8f4745-82ddm"]
Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.774338 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 02 10:58:20 crc kubenswrapper[4782]: I0202 10:58:20.360249 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdf8f4745-82ddm" event={"ID":"ab6192fa-a576-411f-8083-2d6bfa57c39f","Type":"ContainerStarted","Data":"432b7d36382afa67bf37cdf1d06d416b975f874f74041da2a3bf4687b71fad8c"}
Feb 02 10:58:20 crc kubenswrapper[4782]: I0202 10:58:20.360691 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdf8f4745-82ddm" event={"ID":"ab6192fa-a576-411f-8083-2d6bfa57c39f","Type":"ContainerStarted","Data":"0b2df805536c1ecf0b92b90cd45b184370e0405e5aefc8742b965dfc34956403"}
Feb 02 10:58:20 crc kubenswrapper[4782]: I0202 10:58:20.366398 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d71c3db-1389-4568-bb5e-c87dc6a60ddd","Type":"ContainerStarted","Data":"41916449d799b7018510e13a599c8b6f77fe467308b23a2242ecfcac2a84e8a1"}
Feb 02 10:58:20 crc kubenswrapper[4782]: I0202 10:58:20.381151 4782 generic.go:334] "Generic (PLEG): container finished" podID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerID="f08c19bb6d845c3aebb5e1bac413c2ca39e5ea569262e16c1b854d1f98b2646c" exitCode=0
Feb 02 10:58:20 crc kubenswrapper[4782]: I0202 10:58:20.381194 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644b87c8cc-7cfbr" event={"ID":"c1b76222-36df-45a6-ac9f-edb412c8a2ad","Type":"ContainerDied","Data":"f08c19bb6d845c3aebb5e1bac413c2ca39e5ea569262e16c1b854d1f98b2646c"}
Feb 02 10:58:20 crc kubenswrapper[4782]: I0202 10:58:20.795133 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-644b87c8cc-7cfbr" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.141:9696/\": dial tcp 10.217.0.141:9696: connect: connection refused"
probeType="Readiness" pod="openstack/neutron-644b87c8cc-7cfbr" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.141:9696/\": dial tcp 10.217.0.141:9696: connect: connection refused" Feb 02 10:58:21 crc kubenswrapper[4782]: I0202 10:58:21.397048 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d71c3db-1389-4568-bb5e-c87dc6a60ddd","Type":"ContainerStarted","Data":"13ab1e74453151aeec36a4544b2ac740ed5f600cb10df6c778f71bab286e1215"} Feb 02 10:58:21 crc kubenswrapper[4782]: I0202 10:58:21.404141 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdf8f4745-82ddm" event={"ID":"ab6192fa-a576-411f-8083-2d6bfa57c39f","Type":"ContainerStarted","Data":"d13bff30fb9899b308dc138a8349977cbbd535521034249f1c4d00bfd79578fd"} Feb 02 10:58:21 crc kubenswrapper[4782]: I0202 10:58:21.405160 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:21 crc kubenswrapper[4782]: I0202 10:58:21.431885 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5bdf8f4745-82ddm" podStartSLOduration=3.431863499 podStartE2EDuration="3.431863499s" podCreationTimestamp="2026-02-02 10:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:21.430620553 +0000 UTC m=+1181.314813269" watchObservedRunningTime="2026-02-02 10:58:21.431863499 +0000 UTC m=+1181.316056215" Feb 02 10:58:21 crc kubenswrapper[4782]: I0202 10:58:21.663736 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:22 crc kubenswrapper[4782]: I0202 10:58:22.031616 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:22 crc kubenswrapper[4782]: I0202 10:58:22.479585 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d71c3db-1389-4568-bb5e-c87dc6a60ddd","Type":"ContainerStarted","Data":"869e66423515b44b6ac5aeb661ead8d7d8996a659be8e6c4dd033373573dbf55"} Feb 02 10:58:22 crc kubenswrapper[4782]: I0202 10:58:22.480382 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.004876 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.031469 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.031447412 podStartE2EDuration="5.031447412s" podCreationTimestamp="2026-02-02 10:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:22.524171921 +0000 UTC m=+1182.408364657" watchObservedRunningTime="2026-02-02 10:58:23.031447412 +0000 UTC m=+1182.915640128" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.078761 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-pbdmr"] Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.078974 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" 
podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="dnsmasq-dns" containerID="cri-o://57ad28ba710e6e1893b164e98b4a9426687f44b060fefafca3e1d4c41edc3143" gracePeriod=10 Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.117085 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.144055 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.305051 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.539504 4782 generic.go:334] "Generic (PLEG): container finished" podID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerID="57ad28ba710e6e1893b164e98b4a9426687f44b060fefafca3e1d4c41edc3143" exitCode=0 Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.539564 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" event={"ID":"7f6a8ebb-a211-4505-b934-3048a67b2f47","Type":"ContainerDied","Data":"57ad28ba710e6e1893b164e98b4a9426687f44b060fefafca3e1d4c41edc3143"} Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.541423 4782 generic.go:334] "Generic (PLEG): container finished" podID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerID="786fa8a7409c75cc654771de8b93722936d3a222c6348896fdf1e92677c32d53" exitCode=0 Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.541599 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="cinder-scheduler" containerID="cri-o://d8a0b8246e525961c77b2290fcb94427f50f92ca53fb13fcaaac7f8ae8d09e97" gracePeriod=30 Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.549101 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644b87c8cc-7cfbr" event={"ID":"c1b76222-36df-45a6-ac9f-edb412c8a2ad","Type":"ContainerDied","Data":"786fa8a7409c75cc654771de8b93722936d3a222c6348896fdf1e92677c32d53"} Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.550618 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="probe" containerID="cri-o://1005c6933c36048ea8d7266b59a453bf785f8fbd2b74ce215f2b2456d907b65a" gracePeriod=30 Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.643043 4782 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.661356 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-config\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") "
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.661404 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-httpd-config\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") "
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.661517 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-ovndb-tls-certs\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") "
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.661600 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-internal-tls-certs\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") "
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.662748 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-public-tls-certs\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") "
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.662796 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-combined-ca-bundle\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") "
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.662875 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d72m\" (UniqueName: \"kubernetes.io/projected/c1b76222-36df-45a6-ac9f-edb412c8a2ad-kube-api-access-7d72m\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") "
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.674431 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b76222-36df-45a6-ac9f-edb412c8a2ad-kube-api-access-7d72m" (OuterVolumeSpecName: "kube-api-access-7d72m") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "kube-api-access-7d72m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.683979 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.765118 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d72m\" (UniqueName: \"kubernetes.io/projected/c1b76222-36df-45a6-ac9f-edb412c8a2ad-kube-api-access-7d72m\") on node \"crc\" DevicePath \"\""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.765358 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.793019 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-config" (OuterVolumeSpecName: "config") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.797175 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.803216 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.806932 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.816838 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.872300 4782 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.872377 4782 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.872392 4782 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.872404 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.872417 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-config\") on node \"crc\" DevicePath \"\""
Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.996031 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr"
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.076142 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-dns-svc\") pod \"7f6a8ebb-a211-4505-b934-3048a67b2f47\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") "
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.076470 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-config\") pod \"7f6a8ebb-a211-4505-b934-3048a67b2f47\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") "
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.076572 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-sb\") pod \"7f6a8ebb-a211-4505-b934-3048a67b2f47\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") "
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.076707 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-nb\") pod \"7f6a8ebb-a211-4505-b934-3048a67b2f47\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") "
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.076807 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmddp\" (UniqueName: \"kubernetes.io/projected/7f6a8ebb-a211-4505-b934-3048a67b2f47-kube-api-access-rmddp\") pod \"7f6a8ebb-a211-4505-b934-3048a67b2f47\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") "
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.098701 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f6a8ebb-a211-4505-b934-3048a67b2f47-kube-api-access-rmddp" (OuterVolumeSpecName: "kube-api-access-rmddp") pod "7f6a8ebb-a211-4505-b934-3048a67b2f47" (UID: "7f6a8ebb-a211-4505-b934-3048a67b2f47"). InnerVolumeSpecName "kube-api-access-rmddp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.185834 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmddp\" (UniqueName: \"kubernetes.io/projected/7f6a8ebb-a211-4505-b934-3048a67b2f47-kube-api-access-rmddp\") on node \"crc\" DevicePath \"\""
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.247368 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f6a8ebb-a211-4505-b934-3048a67b2f47" (UID: "7f6a8ebb-a211-4505-b934-3048a67b2f47"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.266053 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-config" (OuterVolumeSpecName: "config") pod "7f6a8ebb-a211-4505-b934-3048a67b2f47" (UID: "7f6a8ebb-a211-4505-b934-3048a67b2f47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.277827 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f6a8ebb-a211-4505-b934-3048a67b2f47" (UID: "7f6a8ebb-a211-4505-b934-3048a67b2f47"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.288626 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-config\") on node \"crc\" DevicePath \"\""
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.288777 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.288788 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.299366 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f6a8ebb-a211-4505-b934-3048a67b2f47" (UID: "7f6a8ebb-a211-4505-b934-3048a67b2f47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.390953 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.519034 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.550230 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" event={"ID":"7f6a8ebb-a211-4505-b934-3048a67b2f47","Type":"ContainerDied","Data":"d2d57fff99a40c3d971a276c962ad40364a2dc18610c2d3bd9d74bd06dd02f62"} Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.550275 4782 scope.go:117] "RemoveContainer" containerID="57ad28ba710e6e1893b164e98b4a9426687f44b060fefafca3e1d4c41edc3143" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.550376 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.553602 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644b87c8cc-7cfbr" event={"ID":"c1b76222-36df-45a6-ac9f-edb412c8a2ad","Type":"ContainerDied","Data":"8889fd515dbda34f10b34a65e145848adfe2d17e55c2e3acb24297eefee67df3"} Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.553691 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.584568 4782 scope.go:117] "RemoveContainer" containerID="307a5c49cf6e90bbab2ae7599314e3e57ac09662374b82e043747eec646d2bdd" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.620664 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-pbdmr"] Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.635283 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-pbdmr"] Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.652261 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-644b87c8cc-7cfbr"] Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.666747 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-644b87c8cc-7cfbr"] Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.680482 4782 scope.go:117] "RemoveContainer" containerID="f08c19bb6d845c3aebb5e1bac413c2ca39e5ea569262e16c1b854d1f98b2646c" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.715514 4782 scope.go:117] "RemoveContainer" containerID="786fa8a7409c75cc654771de8b93722936d3a222c6348896fdf1e92677c32d53" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.830622 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" path="/var/lib/kubelet/pods/7f6a8ebb-a211-4505-b934-3048a67b2f47/volumes" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.831520 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" path="/var/lib/kubelet/pods/c1b76222-36df-45a6-ac9f-edb412c8a2ad/volumes" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.947790 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:25 
crc kubenswrapper[4782]: I0202 10:58:25.022020 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b7797d578-tmg69"] Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.022285 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b7797d578-tmg69" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api-log" containerID="cri-o://985a045376b8765a9f6e8767fddd038e288b28f20d40fab7634f3c8194dfd573" gracePeriod=30 Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.022550 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b7797d578-tmg69" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api" containerID="cri-o://de993ad71b2389fa8f527a4a099b49faf994e7a1a4f1e91b9ec465c30000ae3f" gracePeriod=30 Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.382916 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.384735 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.578409 4782 generic.go:334] "Generic (PLEG): container finished" podID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerID="1005c6933c36048ea8d7266b59a453bf785f8fbd2b74ce215f2b2456d907b65a" exitCode=0 Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.578791 4782 generic.go:334] "Generic (PLEG): container finished" podID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerID="d8a0b8246e525961c77b2290fcb94427f50f92ca53fb13fcaaac7f8ae8d09e97" exitCode=0 Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.578471 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f","Type":"ContainerDied","Data":"1005c6933c36048ea8d7266b59a453bf785f8fbd2b74ce215f2b2456d907b65a"} Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.578891 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f","Type":"ContainerDied","Data":"d8a0b8246e525961c77b2290fcb94427f50f92ca53fb13fcaaac7f8ae8d09e97"} Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.583795 4782 generic.go:334] "Generic (PLEG): container finished" podID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerID="985a045376b8765a9f6e8767fddd038e288b28f20d40fab7634f3c8194dfd573" exitCode=143 Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.583835 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7797d578-tmg69" event={"ID":"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0","Type":"ContainerDied","Data":"985a045376b8765a9f6e8767fddd038e288b28f20d40fab7634f3c8194dfd573"} Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.719599 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-555cfb6c68-sntkc"] Feb 02 10:58:25 crc kubenswrapper[4782]: E0202 10:58:25.720007 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-api" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720027 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-api" Feb 02 10:58:25 crc kubenswrapper[4782]: E0202 10:58:25.720060 4782 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="init" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720067 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="init" Feb 02 10:58:25 crc kubenswrapper[4782]: E0202 10:58:25.720088 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720096 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" Feb 02 10:58:25 crc kubenswrapper[4782]: E0202 10:58:25.720109 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="dnsmasq-dns" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720115 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="dnsmasq-dns" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720297 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="dnsmasq-dns" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720310 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720324 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-api" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.721353 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.741848 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-555cfb6c68-sntkc"] Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824572 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-scripts\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824625 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-internal-tls-certs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824674 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-config-data\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824713 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9040c71d-579d-4f4e-99cf-bb76289b9aa3-logs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824736 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjtv9\" (UniqueName: \"kubernetes.io/projected/9040c71d-579d-4f4e-99cf-bb76289b9aa3-kube-api-access-hjtv9\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824783 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-combined-ca-bundle\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824803 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-public-tls-certs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.872895 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926046 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjtv9\" (UniqueName: \"kubernetes.io/projected/9040c71d-579d-4f4e-99cf-bb76289b9aa3-kube-api-access-hjtv9\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926158 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-combined-ca-bundle\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926191 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-public-tls-certs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926278 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-scripts\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926305 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-internal-tls-certs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926334 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-config-data\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926382 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9040c71d-579d-4f4e-99cf-bb76289b9aa3-logs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.931058 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9040c71d-579d-4f4e-99cf-bb76289b9aa3-logs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.941582 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-public-tls-certs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.946197 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-scripts\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.967522 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-config-data\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.967534 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-combined-ca-bundle\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.967947 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-internal-tls-certs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.981344 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjtv9\" (UniqueName: \"kubernetes.io/projected/9040c71d-579d-4f4e-99cf-bb76289b9aa3-kube-api-access-hjtv9\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.028392 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-etc-machine-id\") pod \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.028459 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-combined-ca-bundle\") pod \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.028535 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data-custom\") pod \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.028605 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htwrb\" (UniqueName: \"kubernetes.io/projected/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-kube-api-access-htwrb\") pod \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.028700 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-scripts\") pod \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\" (UID: 
\"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.028721 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data\") pod \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.036130 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" (UID: "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.038779 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-scripts" (OuterVolumeSpecName: "scripts") pod "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" (UID: "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.041030 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-kube-api-access-htwrb" (OuterVolumeSpecName: "kube-api-access-htwrb") pod "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" (UID: "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f"). InnerVolumeSpecName "kube-api-access-htwrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.049186 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" (UID: "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.063152 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.126122 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" (UID: "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.134596 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.134629 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.134650 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htwrb\" (UniqueName: \"kubernetes.io/projected/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-kube-api-access-htwrb\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.134659 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.134668 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.605770 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f","Type":"ContainerDied","Data":"5a7279bfcc6fbedda5247577044693b3e9c719e1402b5cf6df02c6d805661e2a"} Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.605809 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.605839 4782 scope.go:117] "RemoveContainer" containerID="1005c6933c36048ea8d7266b59a453bf785f8fbd2b74ce215f2b2456d907b65a" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.628761 4782 scope.go:117] "RemoveContainer" containerID="d8a0b8246e525961c77b2290fcb94427f50f92ca53fb13fcaaac7f8ae8d09e97" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.024172 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data" (OuterVolumeSpecName: "config-data") pod "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" (UID: "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.050661 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.252033 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.266686 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.331779 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:27 crc kubenswrapper[4782]: E0202 10:58:27.332139 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="probe" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.332150 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="probe" Feb 02 10:58:27 crc kubenswrapper[4782]: E0202 10:58:27.332166 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="cinder-scheduler" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.332172 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="cinder-scheduler" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.332315 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="probe" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.332329 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="cinder-scheduler" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.333226 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.340394 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.342536 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.356902 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-scripts\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.356950 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-config-data\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.357004 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.357029 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llrg7\" (UniqueName: \"kubernetes.io/projected/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-kube-api-access-llrg7\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.357055 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.357119 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.458176 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.458228 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-scripts\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.458259 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-config-data\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.458303 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.458340 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llrg7\" (UniqueName: \"kubernetes.io/projected/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-kube-api-access-llrg7\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.458373 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.460160 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.467080 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.471386 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.471839 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-scripts\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.477386 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-config-data\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.482309 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llrg7\" (UniqueName: \"kubernetes.io/projected/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-kube-api-access-llrg7\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " 
pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.532967 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-555cfb6c68-sntkc"] Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.632279 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-555cfb6c68-sntkc" event={"ID":"9040c71d-579d-4f4e-99cf-bb76289b9aa3","Type":"ContainerStarted","Data":"a0be2b1a93ee53e9d262edc76469a71cac4773ff8420a23dfd775e046a7d0049"} Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.672922 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.165955 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.429936 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b7797d578-tmg69" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": dial tcp 10.217.0.147:9311: connect: connection refused" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.430190 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b7797d578-tmg69" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": dial tcp 10.217.0.147:9311: connect: connection refused" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.652335 4782 generic.go:334] "Generic (PLEG): container finished" podID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerID="de993ad71b2389fa8f527a4a099b49faf994e7a1a4f1e91b9ec465c30000ae3f" exitCode=0 Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.652440 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7797d578-tmg69" event={"ID":"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0","Type":"ContainerDied","Data":"de993ad71b2389fa8f527a4a099b49faf994e7a1a4f1e91b9ec465c30000ae3f"} Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.693041 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-555cfb6c68-sntkc" event={"ID":"9040c71d-579d-4f4e-99cf-bb76289b9aa3","Type":"ContainerStarted","Data":"35761f96fdfa68040ac16f88eb7bb841866041d408860331272083c2682a2947"} Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.693118 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-555cfb6c68-sntkc" event={"ID":"9040c71d-579d-4f4e-99cf-bb76289b9aa3","Type":"ContainerStarted","Data":"25edc0c259aa7a2af7d10f2cfb199e9dba5d3f9e464ed5556e8c20ca05526a89"} Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.696139 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.696205 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.722429 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff","Type":"ContainerStarted","Data":"31cec05d225286ce8a6a15663554bcab5d2740787f3c678706d66aced7845935"} Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.726486 4782 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/placement-555cfb6c68-sntkc" podStartSLOduration=3.7264637350000003 podStartE2EDuration="3.726463735s" podCreationTimestamp="2026-02-02 10:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:28.713946686 +0000 UTC m=+1188.598139392" watchObservedRunningTime="2026-02-02 10:58:28.726463735 +0000 UTC m=+1188.610656451" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.854737 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" path="/var/lib/kubelet/pods/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f/volumes" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.919160 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.995670 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-logs\") pod \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.995911 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data\") pod \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.995984 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data-custom\") pod \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.996038 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkkzp\" (UniqueName: \"kubernetes.io/projected/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-kube-api-access-xkkzp\") pod \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.996095 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-combined-ca-bundle\") pod \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.996185 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-logs" (OuterVolumeSpecName: "logs") pod "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" (UID: "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.996476 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.016928 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" (UID: "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.023406 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-kube-api-access-xkkzp" (OuterVolumeSpecName: "kube-api-access-xkkzp") pod "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" (UID: "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0"). InnerVolumeSpecName "kube-api-access-xkkzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.038910 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" (UID: "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.068826 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data" (OuterVolumeSpecName: "config-data") pod "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" (UID: "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.099631 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.099685 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.099700 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkkzp\" (UniqueName: \"kubernetes.io/projected/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-kube-api-access-xkkzp\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.099709 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.734653 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff","Type":"ContainerStarted","Data":"b2f6e262f9c7877d00519b4042ce035463f728ae3abc96e71e36bb44e8a6796e"} Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.734694 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff","Type":"ContainerStarted","Data":"29b316a5f1a17523790f899734faffe6f0ffdfd126e485f5490c426ba69f458f"} Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.739591 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7797d578-tmg69" event={"ID":"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0","Type":"ContainerDied","Data":"486044541d5c070266883b9d8c5a598bb41438b6bc2f68afedb4cd643ff3c9ee"} Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.739660 4782 scope.go:117] "RemoveContainer" containerID="de993ad71b2389fa8f527a4a099b49faf994e7a1a4f1e91b9ec465c30000ae3f" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.739859 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.761608 4782 scope.go:117] "RemoveContainer" containerID="985a045376b8765a9f6e8767fddd038e288b28f20d40fab7634f3c8194dfd573" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.771937 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.771919654 podStartE2EDuration="2.771919654s" podCreationTimestamp="2026-02-02 10:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:29.769634029 +0000 UTC m=+1189.653826735" watchObservedRunningTime="2026-02-02 10:58:29.771919654 +0000 UTC m=+1189.656112370" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.793164 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b7797d578-tmg69"] Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.799190 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5b7797d578-tmg69"] Feb 02 10:58:30 crc kubenswrapper[4782]: I0202 10:58:30.342851 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-79d66b847-whsks" Feb 02 10:58:30 crc kubenswrapper[4782]: I0202 10:58:30.832385 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" path="/var/lib/kubelet/pods/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0/volumes" Feb 02 10:58:31 crc kubenswrapper[4782]: I0202 10:58:31.434712 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 10:58:32 crc kubenswrapper[4782]: I0202 10:58:32.673814 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.087195 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 10:58:35 crc kubenswrapper[4782]: E0202 10:58:35.087920 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api-log" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.087941 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api-log" Feb 02 10:58:35 crc kubenswrapper[4782]: E0202 10:58:35.087958 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.087967 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.088173 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api-log" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.088187 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.088874 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.097147 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.097574 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.098215 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.109788 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-92ndr" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.204819 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7ed19b68-33c0-45b1-acbc-b6e9def4e565-openstack-config\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.204907 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzhcr\" (UniqueName: \"kubernetes.io/projected/7ed19b68-33c0-45b1-acbc-b6e9def4e565-kube-api-access-rzhcr\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.204938 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed19b68-33c0-45b1-acbc-b6e9def4e565-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.205127 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7ed19b68-33c0-45b1-acbc-b6e9def4e565-openstack-config-secret\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.306303 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7ed19b68-33c0-45b1-acbc-b6e9def4e565-openstack-config-secret\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.306419 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7ed19b68-33c0-45b1-acbc-b6e9def4e565-openstack-config\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.306462 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzhcr\" (UniqueName: \"kubernetes.io/projected/7ed19b68-33c0-45b1-acbc-b6e9def4e565-kube-api-access-rzhcr\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.306484 4782 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed19b68-33c0-45b1-acbc-b6e9def4e565-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.307556 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7ed19b68-33c0-45b1-acbc-b6e9def4e565-openstack-config\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.312391 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7ed19b68-33c0-45b1-acbc-b6e9def4e565-openstack-config-secret\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.327813 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed19b68-33c0-45b1-acbc-b6e9def4e565-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.332671 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzhcr\" (UniqueName: \"kubernetes.io/projected/7ed19b68-33c0-45b1-acbc-b6e9def4e565-kube-api-access-rzhcr\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.441999 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.977070 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 10:58:36 crc kubenswrapper[4782]: W0202 10:58:36.004230 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ed19b68_33c0_45b1_acbc_b6e9def4e565.slice/crio-93b50e5f515807e4b2907f79c15349731001332d43c756b0eb3e503b4a50a135 WatchSource:0}: Error finding container 93b50e5f515807e4b2907f79c15349731001332d43c756b0eb3e503b4a50a135: Status 404 returned error can't find the container with id 93b50e5f515807e4b2907f79c15349731001332d43c756b0eb3e503b4a50a135 Feb 02 10:58:36 crc kubenswrapper[4782]: I0202 10:58:36.817950 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7ed19b68-33c0-45b1-acbc-b6e9def4e565","Type":"ContainerStarted","Data":"93b50e5f515807e4b2907f79c15349731001332d43c756b0eb3e503b4a50a135"} Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.381121 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.549505 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-config-data\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.549595 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-combined-ca-bundle\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.549927 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-log-httpd\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550008 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-run-httpd\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550061 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5tht\" (UniqueName: \"kubernetes.io/projected/8eb720ee-de8d-42e4-b189-aa3d58478ab9-kube-api-access-g5tht\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550220 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-sg-core-conf-yaml\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550251 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-scripts\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550444 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550733 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550978 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550990 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.565853 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb720ee-de8d-42e4-b189-aa3d58478ab9-kube-api-access-g5tht" (OuterVolumeSpecName: "kube-api-access-g5tht") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "kube-api-access-g5tht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.586181 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.591924 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-scripts" (OuterVolumeSpecName: "scripts") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.652594 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5tht\" (UniqueName: \"kubernetes.io/projected/8eb720ee-de8d-42e4-b189-aa3d58478ab9-kube-api-access-g5tht\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.652654 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.652668 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.672115 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.722500 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-config-data" (OuterVolumeSpecName: "config-data") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.754243 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.754278 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.869345 4782 generic.go:334] "Generic (PLEG): container finished" podID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerID="140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96" exitCode=137 Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.869632 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerDied","Data":"140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96"} Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.869690 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerDied","Data":"995e3d21dc8fc39f728f7ea640cf5b2814a34afafd0aba1f79572a1482443e61"} Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.869741 4782 scope.go:117] "RemoveContainer" containerID="140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.869924 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.915618 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.929393 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.933313 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.934056 4782 scope.go:117] "RemoveContainer" containerID="204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.949727 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:58:37 crc kubenswrapper[4782]: E0202 10:58:37.950193 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-central-agent" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950213 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-central-agent" Feb 02 10:58:37 crc kubenswrapper[4782]: E0202 10:58:37.950227 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="sg-core" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950233 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="sg-core" Feb 02 10:58:37 crc kubenswrapper[4782]: E0202 10:58:37.950243 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" 
containerName="ceilometer-notification-agent" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950250 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-notification-agent" Feb 02 10:58:37 crc kubenswrapper[4782]: E0202 10:58:37.950259 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="proxy-httpd" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950264 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="proxy-httpd" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950443 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-central-agent" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950460 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="sg-core" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950471 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-notification-agent" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950484 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="proxy-httpd" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.952143 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.955068 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.955260 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.967136 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.967178 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-config-data\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.968510 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-log-httpd\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.968562 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.968672 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-run-httpd\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.968716 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-scripts\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.969116 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89hm9\" (UniqueName: \"kubernetes.io/projected/ce5deca8-5a47-4769-9518-5cb398a7cf5c-kube-api-access-89hm9\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.983537 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.027832 4782 scope.go:117] "RemoveContainer" containerID="64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-log-httpd\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070370 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070404 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-run-httpd\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070430 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-scripts\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070480 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89hm9\" (UniqueName: \"kubernetes.io/projected/ce5deca8-5a47-4769-9518-5cb398a7cf5c-kube-api-access-89hm9\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070510 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070526 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-config-data\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.071287 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-run-httpd\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.071600 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-log-httpd\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.075094 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-config-data\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.080605 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-scripts\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.083852 4782 scope.go:117] "RemoveContainer" containerID="cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.083908 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.087238 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.121609 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89hm9\" (UniqueName: \"kubernetes.io/projected/ce5deca8-5a47-4769-9518-5cb398a7cf5c-kube-api-access-89hm9\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.180140 4782 scope.go:117] "RemoveContainer" containerID="140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96" Feb 02 10:58:38 crc kubenswrapper[4782]: E0202 10:58:38.180584 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eb720ee_de8d_42e4_b189_aa3d58478ab9.slice/crio-995e3d21dc8fc39f728f7ea640cf5b2814a34afafd0aba1f79572a1482443e61\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eb720ee_de8d_42e4_b189_aa3d58478ab9.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:58:38 crc kubenswrapper[4782]: E0202 10:58:38.183465 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96\": container with ID starting with 140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96 not found: ID does not exist" containerID="140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.183514 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96"} err="failed to get container status \"140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96\": rpc error: code = NotFound desc = could not find container \"140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96\": container with ID starting with 140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96 not found: ID does not exist" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.183545 4782 scope.go:117] "RemoveContainer" containerID="204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f" Feb 02 10:58:38 crc kubenswrapper[4782]: E0202 10:58:38.206108 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f\": container with ID starting with 204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f not found: ID does not exist" containerID="204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.206175 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f"} err="failed to get container status \"204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f\": rpc error: code = NotFound desc = could not find container \"204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f\": container with ID starting with 204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f not found: ID does not exist" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.206202 4782 scope.go:117] "RemoveContainer" containerID="64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9" Feb 02 10:58:38 crc kubenswrapper[4782]: E0202 10:58:38.232947 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9\": container with ID starting with 64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9 not found: ID does not exist" containerID="64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.233009 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9"} err="failed to get container status \"64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9\": rpc error: code = NotFound desc = could not find container 
\"64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9\": container with ID starting with 64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9 not found: ID does not exist" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.233037 4782 scope.go:117] "RemoveContainer" containerID="cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08" Feb 02 10:58:38 crc kubenswrapper[4782]: E0202 10:58:38.233385 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08\": container with ID starting with cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08 not found: ID does not exist" containerID="cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.233404 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08"} err="failed to get container status \"cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08\": rpc error: code = NotFound desc = could not find container \"cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08\": container with ID starting with cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08 not found: ID does not exist" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.328438 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.830795 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" path="/var/lib/kubelet/pods/8eb720ee-de8d-42e4-b189-aa3d58478ab9/volumes" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.867885 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:58:38 crc kubenswrapper[4782]: W0202 10:58:38.877818 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce5deca8_5a47_4769_9518_5cb398a7cf5c.slice/crio-4b9e3570e4603fe01210598f959523b2d00ac48728c953fa2f230b5e90152b83 WatchSource:0}: Error finding container 4b9e3570e4603fe01210598f959523b2d00ac48728c953fa2f230b5e90152b83: Status 404 returned error can't find the container with id 4b9e3570e4603fe01210598f959523b2d00ac48728c953fa2f230b5e90152b83 Feb 02 10:58:39 crc kubenswrapper[4782]: I0202 10:58:39.893254 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerStarted","Data":"4b9e3570e4603fe01210598f959523b2d00ac48728c953fa2f230b5e90152b83"} Feb 02 10:58:40 crc kubenswrapper[4782]: I0202 10:58:40.907129 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerStarted","Data":"84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101"} Feb 02 10:58:46 crc kubenswrapper[4782]: I0202 10:58:46.496213 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:58:47 crc kubenswrapper[4782]: I0202 10:58:47.974001 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"7ed19b68-33c0-45b1-acbc-b6e9def4e565","Type":"ContainerStarted","Data":"881ba63078bc6f671dfef70e53653228b002023c4e2b67b4803da5f486d5d5c2"} Feb 02 10:58:47 crc kubenswrapper[4782]: I0202 10:58:47.977753 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerStarted","Data":"b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6"} Feb 02 10:58:47 crc kubenswrapper[4782]: I0202 10:58:47.997945 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.949726002 podStartE2EDuration="12.997930209s" podCreationTimestamp="2026-02-02 10:58:35 +0000 UTC" firstStartedPulling="2026-02-02 10:58:36.009721756 +0000 UTC m=+1195.893914472" lastFinishedPulling="2026-02-02 10:58:47.057925963 +0000 UTC m=+1206.942118679" observedRunningTime="2026-02-02 10:58:47.995913382 +0000 UTC m=+1207.880106098" watchObservedRunningTime="2026-02-02 10:58:47.997930209 +0000 UTC m=+1207.882122925" Feb 02 10:58:48 crc kubenswrapper[4782]: I0202 10:58:48.987661 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerStarted","Data":"04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003"} Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.062921 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.148285 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c4497f454-mphzd"] Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.148567 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c4497f454-mphzd" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-api" containerID="cri-o://d5712d2140fc769d95cc498077c5b51dd674fa5d0e2d88428c1fd7cfbc99eafe" gracePeriod=30 Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.149090 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c4497f454-mphzd" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-httpd" containerID="cri-o://0986fa20c95b296f1e3d0bb4136c8a84c1e716858b0306720e5061305da8efda" gracePeriod=30 Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.995936 4782 generic.go:334] "Generic (PLEG): container finished" podID="64a58e87-7403-40ee-804f-3ddd256a166a" containerID="0986fa20c95b296f1e3d0bb4136c8a84c1e716858b0306720e5061305da8efda" exitCode=0 Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.996001 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4497f454-mphzd" event={"ID":"64a58e87-7403-40ee-804f-3ddd256a166a","Type":"ContainerDied","Data":"0986fa20c95b296f1e3d0bb4136c8a84c1e716858b0306720e5061305da8efda"} Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.999341 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerStarted","Data":"533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1"} Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.999575 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="proxy-httpd" 
containerID="cri-o://533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1" gracePeriod=30 Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.999605 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="sg-core" containerID="cri-o://04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003" gracePeriod=30 Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.999488 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-central-agent" containerID="cri-o://84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101" gracePeriod=30 Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.999581 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-notification-agent" containerID="cri-o://b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6" gracePeriod=30 Feb 02 10:58:50 crc kubenswrapper[4782]: I0202 10:58:50.000161 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:58:50 crc kubenswrapper[4782]: I0202 10:58:50.039383 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.178028948 podStartE2EDuration="13.039359528s" podCreationTimestamp="2026-02-02 10:58:37 +0000 UTC" firstStartedPulling="2026-02-02 10:58:38.880443708 +0000 UTC m=+1198.764636424" lastFinishedPulling="2026-02-02 10:58:49.741774288 +0000 UTC m=+1209.625967004" observedRunningTime="2026-02-02 10:58:50.028219829 +0000 UTC m=+1209.912412545" watchObservedRunningTime="2026-02-02 10:58:50.039359528 +0000 UTC m=+1209.923552244" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.014179 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerID="04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003" exitCode=2 Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.014812 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerID="b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6" exitCode=0 Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.014877 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerID="84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101" exitCode=0 Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.014213 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerDied","Data":"04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003"} Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.014997 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerDied","Data":"b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6"} Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.015057 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerDied","Data":"84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101"} Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.664884 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-j8z8n"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.666080 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.682315 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-j8z8n"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.759691 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-964hl"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.761210 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.782246 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-964hl"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.790720 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-627f-account-create-update-h6hdk"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.792049 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.806829 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b55df6c-8971-415a-a934-0ec48a149b81-operator-scripts\") pod \"nova-api-db-create-j8z8n\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.806916 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr2lc\" (UniqueName: \"kubernetes.io/projected/8b55df6c-8971-415a-a934-0ec48a149b81-kube-api-access-cr2lc\") pod \"nova-api-db-create-j8z8n\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.807179 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.842481 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-627f-account-create-update-h6hdk"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.880043 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jnw6j"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.883099 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.890752 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jnw6j"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.909077 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9a0fe2-4862-47e1-91d0-553d95235f39-operator-scripts\") pod \"nova-cell0-db-create-964hl\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.909142 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b55df6c-8971-415a-a934-0ec48a149b81-operator-scripts\") pod \"nova-api-db-create-j8z8n\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.909174 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b75d8c-9435-483f-8e95-97690314cfb5-operator-scripts\") pod \"nova-api-627f-account-create-update-h6hdk\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.909203 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr2lc\" (UniqueName: \"kubernetes.io/projected/8b55df6c-8971-415a-a934-0ec48a149b81-kube-api-access-cr2lc\") pod \"nova-api-db-create-j8z8n\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.909234 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnv4\" (UniqueName: \"kubernetes.io/projected/a9b75d8c-9435-483f-8e95-97690314cfb5-kube-api-access-mxnv4\") pod \"nova-api-627f-account-create-update-h6hdk\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.909391 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pbvk\" (UniqueName: \"kubernetes.io/projected/6a9a0fe2-4862-47e1-91d0-553d95235f39-kube-api-access-4pbvk\") pod \"nova-cell0-db-create-964hl\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.911521 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b55df6c-8971-415a-a934-0ec48a149b81-operator-scripts\") pod \"nova-api-db-create-j8z8n\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.937703 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr2lc\" (UniqueName: \"kubernetes.io/projected/8b55df6c-8971-415a-a934-0ec48a149b81-kube-api-access-cr2lc\") pod \"nova-api-db-create-j8z8n\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.979930 4782 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9147-account-create-update-qcs9t"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.981300 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.983928 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.986308 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.993650 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9147-account-create-update-qcs9t"] Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.012139 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6lxs\" (UniqueName: \"kubernetes.io/projected/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-kube-api-access-c6lxs\") pod \"nova-cell1-db-create-jnw6j\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.012231 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pbvk\" (UniqueName: \"kubernetes.io/projected/6a9a0fe2-4862-47e1-91d0-553d95235f39-kube-api-access-4pbvk\") pod \"nova-cell0-db-create-964hl\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.012285 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9a0fe2-4862-47e1-91d0-553d95235f39-operator-scripts\") pod \"nova-cell0-db-create-964hl\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.012324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b75d8c-9435-483f-8e95-97690314cfb5-operator-scripts\") pod \"nova-api-627f-account-create-update-h6hdk\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.012354 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-operator-scripts\") pod \"nova-cell1-db-create-jnw6j\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.012383 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnv4\" (UniqueName: \"kubernetes.io/projected/a9b75d8c-9435-483f-8e95-97690314cfb5-kube-api-access-mxnv4\") pod \"nova-api-627f-account-create-update-h6hdk\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.013335 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9a0fe2-4862-47e1-91d0-553d95235f39-operator-scripts\") pod 
\"nova-cell0-db-create-964hl\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.013438 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b75d8c-9435-483f-8e95-97690314cfb5-operator-scripts\") pod \"nova-api-627f-account-create-update-h6hdk\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.031801 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxnv4\" (UniqueName: \"kubernetes.io/projected/a9b75d8c-9435-483f-8e95-97690314cfb5-kube-api-access-mxnv4\") pod \"nova-api-627f-account-create-update-h6hdk\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.060160 4782 generic.go:334] "Generic (PLEG): container finished" podID="64a58e87-7403-40ee-804f-3ddd256a166a" containerID="d5712d2140fc769d95cc498077c5b51dd674fa5d0e2d88428c1fd7cfbc99eafe" exitCode=0 Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.060228 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4497f454-mphzd" event={"ID":"64a58e87-7403-40ee-804f-3ddd256a166a","Type":"ContainerDied","Data":"d5712d2140fc769d95cc498077c5b51dd674fa5d0e2d88428c1fd7cfbc99eafe"} Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.064429 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pbvk\" (UniqueName: \"kubernetes.io/projected/6a9a0fe2-4862-47e1-91d0-553d95235f39-kube-api-access-4pbvk\") pod \"nova-cell0-db-create-964hl\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.082010 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.110967 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.114494 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abc6f3c-1f7d-4f48-8beb-205307984cdc-operator-scripts\") pod \"nova-cell0-9147-account-create-update-qcs9t\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.114767 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvppj\" (UniqueName: \"kubernetes.io/projected/0abc6f3c-1f7d-4f48-8beb-205307984cdc-kube-api-access-tvppj\") pod \"nova-cell0-9147-account-create-update-qcs9t\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.114917 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6lxs\" (UniqueName: \"kubernetes.io/projected/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-kube-api-access-c6lxs\") pod \"nova-cell1-db-create-jnw6j\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.115322 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-operator-scripts\") pod \"nova-cell1-db-create-jnw6j\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.116713 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-operator-scripts\") pod \"nova-cell1-db-create-jnw6j\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.138631 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6lxs\" (UniqueName: \"kubernetes.io/projected/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-kube-api-access-c6lxs\") pod \"nova-cell1-db-create-jnw6j\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.172143 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3e7e-account-create-update-n4kct"] Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.173258 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.176560 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.208448 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.234216 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abc6f3c-1f7d-4f48-8beb-205307984cdc-operator-scripts\") pod \"nova-cell0-9147-account-create-update-qcs9t\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.234279 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvppj\" (UniqueName: \"kubernetes.io/projected/0abc6f3c-1f7d-4f48-8beb-205307984cdc-kube-api-access-tvppj\") pod \"nova-cell0-9147-account-create-update-qcs9t\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.243602 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abc6f3c-1f7d-4f48-8beb-205307984cdc-operator-scripts\") pod \"nova-cell0-9147-account-create-update-qcs9t\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.254725 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3e7e-account-create-update-n4kct"] Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.280291 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvppj\" (UniqueName: \"kubernetes.io/projected/0abc6f3c-1f7d-4f48-8beb-205307984cdc-kube-api-access-tvppj\") pod \"nova-cell0-9147-account-create-update-qcs9t\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.338218 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07bbffca-46a4-4693-ae3f-011a5ee0e317-operator-scripts\") pod \"nova-cell1-3e7e-account-create-update-n4kct\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.338301 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmz48\" (UniqueName: \"kubernetes.io/projected/07bbffca-46a4-4693-ae3f-011a5ee0e317-kube-api-access-xmz48\") pod \"nova-cell1-3e7e-account-create-update-n4kct\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.443469 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07bbffca-46a4-4693-ae3f-011a5ee0e317-operator-scripts\") pod \"nova-cell1-3e7e-account-create-update-n4kct\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.443559 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmz48\" (UniqueName: \"kubernetes.io/projected/07bbffca-46a4-4693-ae3f-011a5ee0e317-kube-api-access-xmz48\") pod 
\"nova-cell1-3e7e-account-create-update-n4kct\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.444610 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07bbffca-46a4-4693-ae3f-011a5ee0e317-operator-scripts\") pod \"nova-cell1-3e7e-account-create-update-n4kct\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.464844 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmz48\" (UniqueName: \"kubernetes.io/projected/07bbffca-46a4-4693-ae3f-011a5ee0e317-kube-api-access-xmz48\") pod \"nova-cell1-3e7e-account-create-update-n4kct\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.566576 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.574995 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.599441 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.647881 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-ovndb-tls-certs\") pod \"64a58e87-7403-40ee-804f-3ddd256a166a\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.647955 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-config\") pod \"64a58e87-7403-40ee-804f-3ddd256a166a\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.648020 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-httpd-config\") pod \"64a58e87-7403-40ee-804f-3ddd256a166a\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.648142 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-combined-ca-bundle\") pod \"64a58e87-7403-40ee-804f-3ddd256a166a\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.648187 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj6hh\" (UniqueName: \"kubernetes.io/projected/64a58e87-7403-40ee-804f-3ddd256a166a-kube-api-access-nj6hh\") pod \"64a58e87-7403-40ee-804f-3ddd256a166a\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.658165 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/64a58e87-7403-40ee-804f-3ddd256a166a-kube-api-access-nj6hh" (OuterVolumeSpecName: "kube-api-access-nj6hh") pod "64a58e87-7403-40ee-804f-3ddd256a166a" (UID: "64a58e87-7403-40ee-804f-3ddd256a166a"). InnerVolumeSpecName "kube-api-access-nj6hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.664270 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "64a58e87-7403-40ee-804f-3ddd256a166a" (UID: "64a58e87-7403-40ee-804f-3ddd256a166a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.714726 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-config" (OuterVolumeSpecName: "config") pod "64a58e87-7403-40ee-804f-3ddd256a166a" (UID: "64a58e87-7403-40ee-804f-3ddd256a166a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.734524 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64a58e87-7403-40ee-804f-3ddd256a166a" (UID: "64a58e87-7403-40ee-804f-3ddd256a166a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.753529 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "64a58e87-7403-40ee-804f-3ddd256a166a" (UID: "64a58e87-7403-40ee-804f-3ddd256a166a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.759103 4782 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.759157 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.759168 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.759177 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.759186 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj6hh\" (UniqueName: \"kubernetes.io/projected/64a58e87-7403-40ee-804f-3ddd256a166a-kube-api-access-nj6hh\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.791597 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-j8z8n"] Feb 02 10:58:52 crc kubenswrapper[4782]: W0202 10:58:52.815946 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b55df6c_8971_415a_a934_0ec48a149b81.slice/crio-8eb924e1d61694d7578b5162919749cffbf5bdee957855b42828911aa5341f31 WatchSource:0}: Error finding container 8eb924e1d61694d7578b5162919749cffbf5bdee957855b42828911aa5341f31: Status 404 returned error can't find the container with id 8eb924e1d61694d7578b5162919749cffbf5bdee957855b42828911aa5341f31 Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.897476 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-627f-account-create-update-h6hdk"] Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.917861 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jnw6j"] Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.935828 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-964hl"] Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.957208 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.957243 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.082395 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4497f454-mphzd" 
event={"ID":"64a58e87-7403-40ee-804f-3ddd256a166a","Type":"ContainerDied","Data":"320ca372c7bfc8f61ec1c100d757a206e5a44d87197850166ccabc894748efbf"} Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.082735 4782 scope.go:117] "RemoveContainer" containerID="0986fa20c95b296f1e3d0bb4136c8a84c1e716858b0306720e5061305da8efda" Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.082853 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.092983 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-964hl" event={"ID":"6a9a0fe2-4862-47e1-91d0-553d95235f39","Type":"ContainerStarted","Data":"3c54228f50eb200a684b5560277db9357aea7a42d6d5f73377c7a90387ccfc5b"} Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.096197 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-627f-account-create-update-h6hdk" event={"ID":"a9b75d8c-9435-483f-8e95-97690314cfb5","Type":"ContainerStarted","Data":"11b0ee4f356c606379fc2116b6e57a4beff8c051011c9cabd92f2c5770470a9b"} Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.114542 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jnw6j" event={"ID":"c5eccd3e-f895-4c2f-a1e5-c337a89d2439","Type":"ContainerStarted","Data":"2b78e6443e5dee55482e6b131b07bfb875d511a978bb0f266a573d8cf761933e"} Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.115966 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j8z8n" event={"ID":"8b55df6c-8971-415a-a934-0ec48a149b81","Type":"ContainerStarted","Data":"8eb924e1d61694d7578b5162919749cffbf5bdee957855b42828911aa5341f31"} Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.156798 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c4497f454-mphzd"] Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.170723 4782 scope.go:117] "RemoveContainer" containerID="d5712d2140fc769d95cc498077c5b51dd674fa5d0e2d88428c1fd7cfbc99eafe" Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.185405 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6c4497f454-mphzd"] Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.224305 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9147-account-create-update-qcs9t"] Feb 02 10:58:53 crc kubenswrapper[4782]: W0202 10:58:53.225341 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0abc6f3c_1f7d_4f48_8beb_205307984cdc.slice/crio-6639261a2c919bfdd00795d8e79e67bfc4eb5005a72ee2ddb48aeefee9db27fe WatchSource:0}: Error finding container 6639261a2c919bfdd00795d8e79e67bfc4eb5005a72ee2ddb48aeefee9db27fe: Status 404 returned error can't find the container with id 6639261a2c919bfdd00795d8e79e67bfc4eb5005a72ee2ddb48aeefee9db27fe Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.246789 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3e7e-account-create-update-n4kct"] Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.148920 4782 generic.go:334] "Generic (PLEG): container finished" podID="6a9a0fe2-4862-47e1-91d0-553d95235f39" containerID="ec0e250135ad643a0376384574da7a7800b3dd64125badf008a16dd100e20d1b" exitCode=0 Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.149117 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-964hl" event={"ID":"6a9a0fe2-4862-47e1-91d0-553d95235f39","Type":"ContainerDied","Data":"ec0e250135ad643a0376384574da7a7800b3dd64125badf008a16dd100e20d1b"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.156056 4782 generic.go:334] "Generic (PLEG): container finished" podID="a9b75d8c-9435-483f-8e95-97690314cfb5" containerID="0e49e7a8577a45cc63b62f6e59ab36faf5118b4383cec84de5d8b281d39fd041" exitCode=0 Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.156130 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-627f-account-create-update-h6hdk" event={"ID":"a9b75d8c-9435-483f-8e95-97690314cfb5","Type":"ContainerDied","Data":"0e49e7a8577a45cc63b62f6e59ab36faf5118b4383cec84de5d8b281d39fd041"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.186045 4782 generic.go:334] "Generic (PLEG): container finished" podID="07bbffca-46a4-4693-ae3f-011a5ee0e317" containerID="7930f426b6752d1ae7cd1f189cd59a47f3d0e3b099f35ef79f6b78e86ed5ab0d" exitCode=0 Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.186145 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" event={"ID":"07bbffca-46a4-4693-ae3f-011a5ee0e317","Type":"ContainerDied","Data":"7930f426b6752d1ae7cd1f189cd59a47f3d0e3b099f35ef79f6b78e86ed5ab0d"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.186172 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" event={"ID":"07bbffca-46a4-4693-ae3f-011a5ee0e317","Type":"ContainerStarted","Data":"3299d9c6ed3caafaaffaf3c18e551c63a3c2d8756d2d26fa5022af14102dc560"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.195692 4782 generic.go:334] "Generic (PLEG): container finished" podID="c5eccd3e-f895-4c2f-a1e5-c337a89d2439" containerID="23b00d976eb1b671e56fad532c67af4f3f0fb48695bf8d78a64de1654d16975f" exitCode=0 Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.195756 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jnw6j" event={"ID":"c5eccd3e-f895-4c2f-a1e5-c337a89d2439","Type":"ContainerDied","Data":"23b00d976eb1b671e56fad532c67af4f3f0fb48695bf8d78a64de1654d16975f"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.214576 4782 generic.go:334] "Generic (PLEG): container finished" podID="8b55df6c-8971-415a-a934-0ec48a149b81" containerID="938ebd6bbe46ccc6431b3d92e3b6f8803ade372fd58b9fe07b9f065675fc25c4" exitCode=0 Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.214686 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j8z8n" event={"ID":"8b55df6c-8971-415a-a934-0ec48a149b81","Type":"ContainerDied","Data":"938ebd6bbe46ccc6431b3d92e3b6f8803ade372fd58b9fe07b9f065675fc25c4"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.220025 4782 generic.go:334] "Generic (PLEG): container finished" podID="0abc6f3c-1f7d-4f48-8beb-205307984cdc" containerID="59303b0d2ccdb82f829b283e200498f7a3b29c09b53180da767f3025ed87821c" exitCode=0 Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.220159 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9147-account-create-update-qcs9t" event={"ID":"0abc6f3c-1f7d-4f48-8beb-205307984cdc","Type":"ContainerDied","Data":"59303b0d2ccdb82f829b283e200498f7a3b29c09b53180da767f3025ed87821c"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.220361 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-9147-account-create-update-qcs9t" event={"ID":"0abc6f3c-1f7d-4f48-8beb-205307984cdc","Type":"ContainerStarted","Data":"6639261a2c919bfdd00795d8e79e67bfc4eb5005a72ee2ddb48aeefee9db27fe"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.831579 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" path="/var/lib/kubelet/pods/64a58e87-7403-40ee-804f-3ddd256a166a/volumes" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.662856 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.731252 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abc6f3c-1f7d-4f48-8beb-205307984cdc-operator-scripts\") pod \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.731350 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvppj\" (UniqueName: \"kubernetes.io/projected/0abc6f3c-1f7d-4f48-8beb-205307984cdc-kube-api-access-tvppj\") pod \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.733047 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0abc6f3c-1f7d-4f48-8beb-205307984cdc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0abc6f3c-1f7d-4f48-8beb-205307984cdc" (UID: "0abc6f3c-1f7d-4f48-8beb-205307984cdc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.774002 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0abc6f3c-1f7d-4f48-8beb-205307984cdc-kube-api-access-tvppj" (OuterVolumeSpecName: "kube-api-access-tvppj") pod "0abc6f3c-1f7d-4f48-8beb-205307984cdc" (UID: "0abc6f3c-1f7d-4f48-8beb-205307984cdc"). InnerVolumeSpecName "kube-api-access-tvppj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.835759 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abc6f3c-1f7d-4f48-8beb-205307984cdc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.835781 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvppj\" (UniqueName: \"kubernetes.io/projected/0abc6f3c-1f7d-4f48-8beb-205307984cdc-kube-api-access-tvppj\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.899440 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.908848 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.931577 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.963310 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.975400 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038032 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-operator-scripts\") pod \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038421 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmz48\" (UniqueName: \"kubernetes.io/projected/07bbffca-46a4-4693-ae3f-011a5ee0e317-kube-api-access-xmz48\") pod \"07bbffca-46a4-4693-ae3f-011a5ee0e317\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038468 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxnv4\" (UniqueName: \"kubernetes.io/projected/a9b75d8c-9435-483f-8e95-97690314cfb5-kube-api-access-mxnv4\") pod \"a9b75d8c-9435-483f-8e95-97690314cfb5\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038553 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07bbffca-46a4-4693-ae3f-011a5ee0e317-operator-scripts\") pod \"07bbffca-46a4-4693-ae3f-011a5ee0e317\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038737 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9a0fe2-4862-47e1-91d0-553d95235f39-operator-scripts\") pod \"6a9a0fe2-4862-47e1-91d0-553d95235f39\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038790 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6lxs\" (UniqueName: \"kubernetes.io/projected/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-kube-api-access-c6lxs\") pod \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038842 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr2lc\" (UniqueName: \"kubernetes.io/projected/8b55df6c-8971-415a-a934-0ec48a149b81-kube-api-access-cr2lc\") pod \"8b55df6c-8971-415a-a934-0ec48a149b81\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038890 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b75d8c-9435-483f-8e95-97690314cfb5-operator-scripts\") pod \"a9b75d8c-9435-483f-8e95-97690314cfb5\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038884 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5eccd3e-f895-4c2f-a1e5-c337a89d2439" (UID: "c5eccd3e-f895-4c2f-a1e5-c337a89d2439"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038917 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b55df6c-8971-415a-a934-0ec48a149b81-operator-scripts\") pod \"8b55df6c-8971-415a-a934-0ec48a149b81\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038940 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pbvk\" (UniqueName: \"kubernetes.io/projected/6a9a0fe2-4862-47e1-91d0-553d95235f39-kube-api-access-4pbvk\") pod \"6a9a0fe2-4862-47e1-91d0-553d95235f39\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.039225 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a9a0fe2-4862-47e1-91d0-553d95235f39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a9a0fe2-4862-47e1-91d0-553d95235f39" (UID: "6a9a0fe2-4862-47e1-91d0-553d95235f39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.039419 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9a0fe2-4862-47e1-91d0-553d95235f39-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.039435 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.040072 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07bbffca-46a4-4693-ae3f-011a5ee0e317-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07bbffca-46a4-4693-ae3f-011a5ee0e317" (UID: "07bbffca-46a4-4693-ae3f-011a5ee0e317"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.041657 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b75d8c-9435-483f-8e95-97690314cfb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9b75d8c-9435-483f-8e95-97690314cfb5" (UID: "a9b75d8c-9435-483f-8e95-97690314cfb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.043070 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b55df6c-8971-415a-a934-0ec48a149b81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b55df6c-8971-415a-a934-0ec48a149b81" (UID: "8b55df6c-8971-415a-a934-0ec48a149b81"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.043487 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b55df6c-8971-415a-a934-0ec48a149b81-kube-api-access-cr2lc" (OuterVolumeSpecName: "kube-api-access-cr2lc") pod "8b55df6c-8971-415a-a934-0ec48a149b81" (UID: "8b55df6c-8971-415a-a934-0ec48a149b81"). InnerVolumeSpecName "kube-api-access-cr2lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.044379 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b75d8c-9435-483f-8e95-97690314cfb5-kube-api-access-mxnv4" (OuterVolumeSpecName: "kube-api-access-mxnv4") pod "a9b75d8c-9435-483f-8e95-97690314cfb5" (UID: "a9b75d8c-9435-483f-8e95-97690314cfb5"). InnerVolumeSpecName "kube-api-access-mxnv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.044745 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a9a0fe2-4862-47e1-91d0-553d95235f39-kube-api-access-4pbvk" (OuterVolumeSpecName: "kube-api-access-4pbvk") pod "6a9a0fe2-4862-47e1-91d0-553d95235f39" (UID: "6a9a0fe2-4862-47e1-91d0-553d95235f39"). InnerVolumeSpecName "kube-api-access-4pbvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.045141 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07bbffca-46a4-4693-ae3f-011a5ee0e317-kube-api-access-xmz48" (OuterVolumeSpecName: "kube-api-access-xmz48") pod "07bbffca-46a4-4693-ae3f-011a5ee0e317" (UID: "07bbffca-46a4-4693-ae3f-011a5ee0e317"). InnerVolumeSpecName "kube-api-access-xmz48". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.053042 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-kube-api-access-c6lxs" (OuterVolumeSpecName: "kube-api-access-c6lxs") pod "c5eccd3e-f895-4c2f-a1e5-c337a89d2439" (UID: "c5eccd3e-f895-4c2f-a1e5-c337a89d2439"). InnerVolumeSpecName "kube-api-access-c6lxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140460 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07bbffca-46a4-4693-ae3f-011a5ee0e317-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140686 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6lxs\" (UniqueName: \"kubernetes.io/projected/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-kube-api-access-c6lxs\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140750 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr2lc\" (UniqueName: \"kubernetes.io/projected/8b55df6c-8971-415a-a934-0ec48a149b81-kube-api-access-cr2lc\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140806 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b75d8c-9435-483f-8e95-97690314cfb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140885 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b55df6c-8971-415a-a934-0ec48a149b81-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140937 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pbvk\" (UniqueName: \"kubernetes.io/projected/6a9a0fe2-4862-47e1-91d0-553d95235f39-kube-api-access-4pbvk\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140998 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmz48\" (UniqueName: \"kubernetes.io/projected/07bbffca-46a4-4693-ae3f-011a5ee0e317-kube-api-access-xmz48\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.141052 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxnv4\" (UniqueName: \"kubernetes.io/projected/a9b75d8c-9435-483f-8e95-97690314cfb5-kube-api-access-mxnv4\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.238134 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jnw6j" event={"ID":"c5eccd3e-f895-4c2f-a1e5-c337a89d2439","Type":"ContainerDied","Data":"2b78e6443e5dee55482e6b131b07bfb875d511a978bb0f266a573d8cf761933e"} Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.238185 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.238206 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b78e6443e5dee55482e6b131b07bfb875d511a978bb0f266a573d8cf761933e" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.240704 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j8z8n" event={"ID":"8b55df6c-8971-415a-a934-0ec48a149b81","Type":"ContainerDied","Data":"8eb924e1d61694d7578b5162919749cffbf5bdee957855b42828911aa5341f31"} Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.240756 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.240765 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb924e1d61694d7578b5162919749cffbf5bdee957855b42828911aa5341f31" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.243956 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9147-account-create-update-qcs9t" event={"ID":"0abc6f3c-1f7d-4f48-8beb-205307984cdc","Type":"ContainerDied","Data":"6639261a2c919bfdd00795d8e79e67bfc4eb5005a72ee2ddb48aeefee9db27fe"} Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.244096 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6639261a2c919bfdd00795d8e79e67bfc4eb5005a72ee2ddb48aeefee9db27fe" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.244006 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.246308 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-964hl" event={"ID":"6a9a0fe2-4862-47e1-91d0-553d95235f39","Type":"ContainerDied","Data":"3c54228f50eb200a684b5560277db9357aea7a42d6d5f73377c7a90387ccfc5b"} Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.246353 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c54228f50eb200a684b5560277db9357aea7a42d6d5f73377c7a90387ccfc5b" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.246404 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.256721 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.256741 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-627f-account-create-update-h6hdk" event={"ID":"a9b75d8c-9435-483f-8e95-97690314cfb5","Type":"ContainerDied","Data":"11b0ee4f356c606379fc2116b6e57a4beff8c051011c9cabd92f2c5770470a9b"} Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.257328 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11b0ee4f356c606379fc2116b6e57a4beff8c051011c9cabd92f2c5770470a9b" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.258652 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" event={"ID":"07bbffca-46a4-4693-ae3f-011a5ee0e317","Type":"ContainerDied","Data":"3299d9c6ed3caafaaffaf3c18e551c63a3c2d8756d2d26fa5022af14102dc560"} Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.258684 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3299d9c6ed3caafaaffaf3c18e551c63a3c2d8756d2d26fa5022af14102dc560" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.258743 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:57 crc kubenswrapper[4782]: I0202 10:58:57.307188 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:57 crc kubenswrapper[4782]: I0202 10:58:57.333345 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:57 crc kubenswrapper[4782]: I0202 10:58:57.432026 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-54577c875b-pcjgd"] Feb 02 10:58:57 crc kubenswrapper[4782]: I0202 10:58:57.432264 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-54577c875b-pcjgd" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-log" containerID="cri-o://ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d" gracePeriod=30 Feb 02 10:58:57 crc kubenswrapper[4782]: I0202 10:58:57.432738 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-54577c875b-pcjgd" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-api" containerID="cri-o://b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5" gracePeriod=30 Feb 02 10:58:58 crc kubenswrapper[4782]: I0202 10:58:58.275713 4782 generic.go:334] "Generic (PLEG): container finished" podID="060c1eb2-7773-4122-8725-bf421f0feaac" containerID="ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d" exitCode=143 Feb 02 10:58:58 crc kubenswrapper[4782]: I0202 10:58:58.275806 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54577c875b-pcjgd" event={"ID":"060c1eb2-7773-4122-8725-bf421f0feaac","Type":"ContainerDied","Data":"ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d"} Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.036895 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.146690 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-config-data\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.146827 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-internal-tls-certs\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.146872 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-public-tls-certs\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.146940 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-scripts\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.146998 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060c1eb2-7773-4122-8725-bf421f0feaac-logs\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.147078 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6nnm\" (UniqueName: \"kubernetes.io/projected/060c1eb2-7773-4122-8725-bf421f0feaac-kube-api-access-p6nnm\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.147138 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-combined-ca-bundle\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.148711 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/060c1eb2-7773-4122-8725-bf421f0feaac-logs" (OuterVolumeSpecName: "logs") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.155833 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/060c1eb2-7773-4122-8725-bf421f0feaac-kube-api-access-p6nnm" (OuterVolumeSpecName: "kube-api-access-p6nnm") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "kube-api-access-p6nnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.155964 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-scripts" (OuterVolumeSpecName: "scripts") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.204164 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.223078 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-config-data" (OuterVolumeSpecName: "config-data") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.249459 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.249756 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060c1eb2-7773-4122-8725-bf421f0feaac-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.249828 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6nnm\" (UniqueName: \"kubernetes.io/projected/060c1eb2-7773-4122-8725-bf421f0feaac-kube-api-access-p6nnm\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.249897 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.249963 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.253488 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.259747 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.311221 4782 generic.go:334] "Generic (PLEG): container finished" podID="060c1eb2-7773-4122-8725-bf421f0feaac" containerID="b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5" exitCode=0 Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.311280 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54577c875b-pcjgd" event={"ID":"060c1eb2-7773-4122-8725-bf421f0feaac","Type":"ContainerDied","Data":"b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5"} Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.311322 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54577c875b-pcjgd" event={"ID":"060c1eb2-7773-4122-8725-bf421f0feaac","Type":"ContainerDied","Data":"cc82b2ae4f32dd0c9b66e6fd35aca7b63dafb5ecb405dc4f5284a0fab8cac1a7"} Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.311363 4782 scope.go:117] "RemoveContainer" containerID="b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.311592 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.339328 4782 scope.go:117] "RemoveContainer" containerID="ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.352861 4782 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.352916 4782 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.362513 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-54577c875b-pcjgd"] Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.372099 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-54577c875b-pcjgd"] Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.373810 4782 scope.go:117] "RemoveContainer" containerID="b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5" Feb 02 10:59:01 crc kubenswrapper[4782]: E0202 10:59:01.374372 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5\": container with ID starting with b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5 not found: ID does not exist" containerID="b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.374836 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5"} err="failed to get container status \"b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5\": rpc error: code = NotFound desc = could not find container \"b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5\": container with ID starting with b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5 not 
found: ID does not exist" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.374865 4782 scope.go:117] "RemoveContainer" containerID="ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d" Feb 02 10:59:01 crc kubenswrapper[4782]: E0202 10:59:01.375271 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d\": container with ID starting with ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d not found: ID does not exist" containerID="ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.375343 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d"} err="failed to get container status \"ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d\": rpc error: code = NotFound desc = could not find container \"ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d\": container with ID starting with ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d not found: ID does not exist" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.158808 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lcdcm"] Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159218 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b75d8c-9435-483f-8e95-97690314cfb5" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159235 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b75d8c-9435-483f-8e95-97690314cfb5" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159252 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-api" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159261 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-api" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159274 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a9a0fe2-4862-47e1-91d0-553d95235f39" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159281 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9a0fe2-4862-47e1-91d0-553d95235f39" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159301 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-log" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159308 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-log" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159321 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5eccd3e-f895-4c2f-a1e5-c337a89d2439" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159328 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5eccd3e-f895-4c2f-a1e5-c337a89d2439" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159338 4782 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-api" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159345 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-api" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159353 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b55df6c-8971-415a-a934-0ec48a149b81" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159359 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b55df6c-8971-415a-a934-0ec48a149b81" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159369 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abc6f3c-1f7d-4f48-8beb-205307984cdc" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159376 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abc6f3c-1f7d-4f48-8beb-205307984cdc" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159390 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07bbffca-46a4-4693-ae3f-011a5ee0e317" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159397 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="07bbffca-46a4-4693-ae3f-011a5ee0e317" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159418 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-httpd" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159425 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-httpd" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159616 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9a0fe2-4862-47e1-91d0-553d95235f39" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159684 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b55df6c-8971-415a-a934-0ec48a149b81" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159704 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b75d8c-9435-483f-8e95-97690314cfb5" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159721 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-httpd" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159738 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-api" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159755 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5eccd3e-f895-4c2f-a1e5-c337a89d2439" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159765 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-log" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159779 4782 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0abc6f3c-1f7d-4f48-8beb-205307984cdc" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159787 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-api" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159810 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="07bbffca-46a4-4693-ae3f-011a5ee0e317" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.160500 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.162235 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.162748 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nhbbk" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.162937 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.174480 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lcdcm"] Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.269198 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-config-data\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.269250 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.269421 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8wsr\" (UniqueName: \"kubernetes.io/projected/f0b52751-0177-4fa7-8d87-fca1cab9a096-kube-api-access-k8wsr\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.269470 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-scripts\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.372205 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-config-data\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.372283 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.372782 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8wsr\" (UniqueName: \"kubernetes.io/projected/f0b52751-0177-4fa7-8d87-fca1cab9a096-kube-api-access-k8wsr\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.373067 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-scripts\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.376814 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-config-data\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.377333 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-scripts\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.377442 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.399383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8wsr\" (UniqueName: \"kubernetes.io/projected/f0b52751-0177-4fa7-8d87-fca1cab9a096-kube-api-access-k8wsr\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.479783 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.832273 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" path="/var/lib/kubelet/pods/060c1eb2-7773-4122-8725-bf421f0feaac/volumes" Feb 02 10:59:03 crc kubenswrapper[4782]: I0202 10:59:03.009068 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lcdcm"] Feb 02 10:59:03 crc kubenswrapper[4782]: I0202 10:59:03.344020 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" event={"ID":"f0b52751-0177-4fa7-8d87-fca1cab9a096","Type":"ContainerStarted","Data":"9ccdb7941abab6279a97836bdd10fb3890d96e1f11f67732e40927619634d91c"} Feb 02 10:59:08 crc kubenswrapper[4782]: I0202 10:59:08.333235 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 10:59:10 crc kubenswrapper[4782]: I0202 10:59:10.406244 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" event={"ID":"f0b52751-0177-4fa7-8d87-fca1cab9a096","Type":"ContainerStarted","Data":"c9a22a15fdf9c10f8fa6ebae4f0ac6052d277f6b5e54ac311112c99327d4ce45"} Feb 02 10:59:10 crc kubenswrapper[4782]: I0202 10:59:10.425941 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" podStartSLOduration=1.33637636 podStartE2EDuration="8.425918088s" podCreationTimestamp="2026-02-02 10:59:02 +0000 UTC" firstStartedPulling="2026-02-02 10:59:03.009470639 +0000 UTC m=+1222.893663355" lastFinishedPulling="2026-02-02 10:59:10.099012327 +0000 UTC m=+1229.983205083" observedRunningTime="2026-02-02 10:59:10.423229811 +0000 UTC m=+1230.307422527" watchObservedRunningTime="2026-02-02 10:59:10.425918088 +0000 UTC m=+1230.310110794" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.409175 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.493015 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerID="533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1" exitCode=137 Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.493061 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerDied","Data":"533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1"} Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.493093 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerDied","Data":"4b9e3570e4603fe01210598f959523b2d00ac48728c953fa2f230b5e90152b83"} Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.493117 4782 scope.go:117] "RemoveContainer" containerID="533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.493302 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499494 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-log-httpd\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499560 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-sg-core-conf-yaml\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499616 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-combined-ca-bundle\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499737 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-scripts\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499785 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-config-data\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499834 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89hm9\" (UniqueName: \"kubernetes.io/projected/ce5deca8-5a47-4769-9518-5cb398a7cf5c-kube-api-access-89hm9\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499937 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-run-httpd\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.500797 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.501096 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.522073 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-scripts" (OuterVolumeSpecName: "scripts") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.531921 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5deca8-5a47-4769-9518-5cb398a7cf5c-kube-api-access-89hm9" (OuterVolumeSpecName: "kube-api-access-89hm9") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "kube-api-access-89hm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.534101 4782 scope.go:117] "RemoveContainer" containerID="04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.539849 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.602238 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.602274 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89hm9\" (UniqueName: \"kubernetes.io/projected/ce5deca8-5a47-4769-9518-5cb398a7cf5c-kube-api-access-89hm9\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.602286 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.602298 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.602306 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.602343 4782 scope.go:117] "RemoveContainer" containerID="b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.611066 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.623447 4782 scope.go:117] "RemoveContainer" containerID="84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.639463 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-config-data" (OuterVolumeSpecName: "config-data") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.646848 4782 scope.go:117] "RemoveContainer" containerID="533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.647380 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1\": container with ID starting with 533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1 not found: ID does not exist" containerID="533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.647511 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1"} err="failed to get container status \"533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1\": rpc error: code = NotFound desc = could not find container \"533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1\": container with ID starting with 533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1 not found: ID does not exist" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.647619 4782 scope.go:117] "RemoveContainer" containerID="04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.648284 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003\": container with ID starting with 04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003 not found: ID does not exist" containerID="04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.648314 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003"} err="failed to get container status \"04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003\": rpc error: code = NotFound desc = could not find container \"04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003\": container with ID starting with 04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003 not found: ID does not exist" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.648335 4782 scope.go:117] "RemoveContainer" containerID="b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.648878 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6\": container with ID starting with b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6 not found: ID does not exist" containerID="b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.648987 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6"} err="failed to get container status \"b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6\": rpc error: code = NotFound desc = could not find container \"b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6\": container with ID starting with b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6 not found: ID does not exist" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.649056 4782 scope.go:117] "RemoveContainer" containerID="84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.649379 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101\": container with ID starting with 84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101 not found: ID does not exist" containerID="84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.649404 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101"} err="failed to get container status \"84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101\": rpc error: code = NotFound desc = could not find container \"84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101\": container with ID starting with 84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101 not found: ID does not exist" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.745483 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.745515 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.843443 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.863738 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.873538 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.874000 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-notification-agent" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874025 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-notification-agent" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 
10:59:20.874053 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="proxy-httpd" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874063 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="proxy-httpd" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.874081 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-central-agent" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874089 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-central-agent" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.874108 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="sg-core" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874115 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="sg-core" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874502 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-central-agent" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874542 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-notification-agent" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874559 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="sg-core" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874577 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="proxy-httpd" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.879240 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.885354 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.888836 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.889262 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.050994 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-run-httpd\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.051032 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cksn\" (UniqueName: \"kubernetes.io/projected/28015087-432c-4906-8c57-406f5bf4371b-kube-api-access-4cksn\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.051315 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-scripts\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.051375 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-config-data\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.051403 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.051612 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.051681 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-log-httpd\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.152986 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-run-httpd\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153026 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cksn\" (UniqueName: \"kubernetes.io/projected/28015087-432c-4906-8c57-406f5bf4371b-kube-api-access-4cksn\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153098 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-scripts\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153131 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-config-data\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153151 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153198 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153217 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-log-httpd\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153867 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-log-httpd\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153958 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-run-httpd\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.160375 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-config-data\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.161694 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.163692 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.167265 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-scripts\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.175876 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cksn\" (UniqueName: \"kubernetes.io/projected/28015087-432c-4906-8c57-406f5bf4371b-kube-api-access-4cksn\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.205780 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.671013 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:59:21 crc kubenswrapper[4782]: W0202 10:59:21.677884 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28015087_432c_4906_8c57_406f5bf4371b.slice/crio-130ac9425f514ac20eca02616de022afd5f7d855996320a48e38657ba530c248 WatchSource:0}: Error finding container 130ac9425f514ac20eca02616de022afd5f7d855996320a48e38657ba530c248: Status 404 returned error can't find the container with id 130ac9425f514ac20eca02616de022afd5f7d855996320a48e38657ba530c248 Feb 02 10:59:22 crc kubenswrapper[4782]: I0202 10:59:22.517762 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerStarted","Data":"0eab3e1922a169100d96f6fb597b0d5d6e2c417157ee64d6c06b97cc49cfa3ff"} Feb 02 10:59:22 crc kubenswrapper[4782]: I0202 10:59:22.518224 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerStarted","Data":"130ac9425f514ac20eca02616de022afd5f7d855996320a48e38657ba530c248"} Feb 02 10:59:22 crc kubenswrapper[4782]: I0202 10:59:22.836099 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" path="/var/lib/kubelet/pods/ce5deca8-5a47-4769-9518-5cb398a7cf5c/volumes" Feb 02 10:59:22 crc kubenswrapper[4782]: I0202 10:59:22.951201 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:59:22 crc kubenswrapper[4782]: I0202 10:59:22.951528 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:59:23 crc kubenswrapper[4782]: I0202 10:59:23.527916 4782 generic.go:334] "Generic (PLEG): container finished" podID="f0b52751-0177-4fa7-8d87-fca1cab9a096" 
containerID="c9a22a15fdf9c10f8fa6ebae4f0ac6052d277f6b5e54ac311112c99327d4ce45" exitCode=0 Feb 02 10:59:23 crc kubenswrapper[4782]: I0202 10:59:23.528017 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" event={"ID":"f0b52751-0177-4fa7-8d87-fca1cab9a096","Type":"ContainerDied","Data":"c9a22a15fdf9c10f8fa6ebae4f0ac6052d277f6b5e54ac311112c99327d4ce45"} Feb 02 10:59:23 crc kubenswrapper[4782]: I0202 10:59:23.533758 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerStarted","Data":"c21c382369813b902b8d01bb5ca3b76271ac75826e3f8a48f08a8227ee3e7c71"} Feb 02 10:59:24 crc kubenswrapper[4782]: I0202 10:59:24.544957 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerStarted","Data":"548d7f70bba91005ffd4c7ff8fe65d2cd5bbe9ed5956e0f90c12cb922dec83e9"} Feb 02 10:59:24 crc kubenswrapper[4782]: I0202 10:59:24.954599 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.036443 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8wsr\" (UniqueName: \"kubernetes.io/projected/f0b52751-0177-4fa7-8d87-fca1cab9a096-kube-api-access-k8wsr\") pod \"f0b52751-0177-4fa7-8d87-fca1cab9a096\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.036722 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-config-data\") pod \"f0b52751-0177-4fa7-8d87-fca1cab9a096\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.036778 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-combined-ca-bundle\") pod \"f0b52751-0177-4fa7-8d87-fca1cab9a096\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.036899 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-scripts\") pod \"f0b52751-0177-4fa7-8d87-fca1cab9a096\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.046809 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-scripts" (OuterVolumeSpecName: "scripts") pod "f0b52751-0177-4fa7-8d87-fca1cab9a096" (UID: "f0b52751-0177-4fa7-8d87-fca1cab9a096"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.046854 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b52751-0177-4fa7-8d87-fca1cab9a096-kube-api-access-k8wsr" (OuterVolumeSpecName: "kube-api-access-k8wsr") pod "f0b52751-0177-4fa7-8d87-fca1cab9a096" (UID: "f0b52751-0177-4fa7-8d87-fca1cab9a096"). InnerVolumeSpecName "kube-api-access-k8wsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.072019 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0b52751-0177-4fa7-8d87-fca1cab9a096" (UID: "f0b52751-0177-4fa7-8d87-fca1cab9a096"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.072158 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-config-data" (OuterVolumeSpecName: "config-data") pod "f0b52751-0177-4fa7-8d87-fca1cab9a096" (UID: "f0b52751-0177-4fa7-8d87-fca1cab9a096"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.139323 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.139591 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.139751 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8wsr\" (UniqueName: \"kubernetes.io/projected/f0b52751-0177-4fa7-8d87-fca1cab9a096-kube-api-access-k8wsr\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.139857 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.570802 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" event={"ID":"f0b52751-0177-4fa7-8d87-fca1cab9a096","Type":"ContainerDied","Data":"9ccdb7941abab6279a97836bdd10fb3890d96e1f11f67732e40927619634d91c"} Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.570844 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ccdb7941abab6279a97836bdd10fb3890d96e1f11f67732e40927619634d91c" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.570882 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.663100 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 10:59:25 crc kubenswrapper[4782]: E0202 10:59:25.663540 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b52751-0177-4fa7-8d87-fca1cab9a096" containerName="nova-cell0-conductor-db-sync" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.663565 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b52751-0177-4fa7-8d87-fca1cab9a096" containerName="nova-cell0-conductor-db-sync" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.663812 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b52751-0177-4fa7-8d87-fca1cab9a096" containerName="nova-cell0-conductor-db-sync" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.664475 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.666253 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nhbbk" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.667519 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.678332 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.751489 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea60fa1f-5751-4f93-8726-ce0c4be54577-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.752935 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgp2s\" (UniqueName: \"kubernetes.io/projected/ea60fa1f-5751-4f93-8726-ce0c4be54577-kube-api-access-zgp2s\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.753164 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea60fa1f-5751-4f93-8726-ce0c4be54577-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.854979 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgp2s\" (UniqueName: \"kubernetes.io/projected/ea60fa1f-5751-4f93-8726-ce0c4be54577-kube-api-access-zgp2s\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.855091 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea60fa1f-5751-4f93-8726-ce0c4be54577-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: 
I0202 10:59:25.855135 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea60fa1f-5751-4f93-8726-ce0c4be54577-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.860774 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea60fa1f-5751-4f93-8726-ce0c4be54577-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.865464 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea60fa1f-5751-4f93-8726-ce0c4be54577-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.873475 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgp2s\" (UniqueName: \"kubernetes.io/projected/ea60fa1f-5751-4f93-8726-ce0c4be54577-kube-api-access-zgp2s\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.989224 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:26 crc kubenswrapper[4782]: I0202 10:59:26.487529 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 10:59:26 crc kubenswrapper[4782]: I0202 10:59:26.586080 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ea60fa1f-5751-4f93-8726-ce0c4be54577","Type":"ContainerStarted","Data":"eae176bd25849e23d16488dbd8df7e5b07b132ca266dc4acf10a7eb4d7f78d50"} Feb 02 10:59:26 crc kubenswrapper[4782]: I0202 10:59:26.591157 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerStarted","Data":"f8e5c120275d7db87897ef6e18aba32d130395e2a8cbe47997aa7cbceb7c4b98"} Feb 02 10:59:26 crc kubenswrapper[4782]: I0202 10:59:26.591556 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:59:26 crc kubenswrapper[4782]: I0202 10:59:26.614601 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.53184408 podStartE2EDuration="6.614580396s" podCreationTimestamp="2026-02-02 10:59:20 +0000 UTC" firstStartedPulling="2026-02-02 10:59:21.680499168 +0000 UTC m=+1241.564691894" lastFinishedPulling="2026-02-02 10:59:25.763235494 +0000 UTC m=+1245.647428210" observedRunningTime="2026-02-02 10:59:26.609641004 +0000 UTC m=+1246.493833740" watchObservedRunningTime="2026-02-02 10:59:26.614580396 +0000 UTC m=+1246.498773112" Feb 02 10:59:27 crc kubenswrapper[4782]: I0202 10:59:27.600972 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ea60fa1f-5751-4f93-8726-ce0c4be54577","Type":"ContainerStarted","Data":"24403be4fefe81ba978fb152610f0aa6f4ed6fbd4dd7f5010c25f8a6bc48717f"} Feb 02 10:59:27 crc kubenswrapper[4782]: I0202 10:59:27.601445 4782 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:27 crc kubenswrapper[4782]: I0202 10:59:27.627304 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.627285695 podStartE2EDuration="2.627285695s" podCreationTimestamp="2026-02-02 10:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:27.619366918 +0000 UTC m=+1247.503559634" watchObservedRunningTime="2026-02-02 10:59:27.627285695 +0000 UTC m=+1247.511478401" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.018290 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.498584 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-5wtv6"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.499906 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.502188 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.502369 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.516065 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5wtv6"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.582900 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-config-data\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.583001 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.583031 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp6t8\" (UniqueName: \"kubernetes.io/projected/baa0ea9b-5d59-4094-a259-2f841d40db2c-kube-api-access-tp6t8\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.583162 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-scripts\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.685482 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-scripts\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.685636 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-config-data\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.685698 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.685725 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp6t8\" (UniqueName: \"kubernetes.io/projected/baa0ea9b-5d59-4094-a259-2f841d40db2c-kube-api-access-tp6t8\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.708559 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-config-data\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.715194 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp6t8\" (UniqueName: \"kubernetes.io/projected/baa0ea9b-5d59-4094-a259-2f841d40db2c-kube-api-access-tp6t8\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.727571 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.728046 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-scripts\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.733795 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.735007 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.741084 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.797852 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.821389 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.887743 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.889521 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.894625 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvf5g\" (UniqueName: \"kubernetes.io/projected/e271d5c6-aeb4-4181-8712-3c80349c7900-kube-api-access-hvf5g\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.902808 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.902997 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-config-data\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.899190 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.918725 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.947747 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.949514 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.958638 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.005179 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-config-data\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.005843 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvf5g\" (UniqueName: \"kubernetes.io/projected/e271d5c6-aeb4-4181-8712-3c80349c7900-kube-api-access-hvf5g\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.006162 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blg7j\" (UniqueName: \"kubernetes.io/projected/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-kube-api-access-blg7j\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.006289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.006312 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.006356 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-logs\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.006401 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-config-data\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.035843 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-config-data\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.042361 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc 
kubenswrapper[4782]: I0202 10:59:37.059419 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.065314 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvf5g\" (UniqueName: \"kubernetes.io/projected/e271d5c6-aeb4-4181-8712-3c80349c7900-kube-api-access-hvf5g\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.098751 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.100487 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.106080 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.119468 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.119712 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b7e86-63fb-403b-a297-14a38039065c-logs\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.119943 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-logs\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.120041 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-config-data\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.120090 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-config-data\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.120146 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntjpf\" (UniqueName: \"kubernetes.io/projected/113b7e86-63fb-403b-a297-14a38039065c-kube-api-access-ntjpf\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.120307 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.120341 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blg7j\" (UniqueName: 
\"kubernetes.io/projected/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-kube-api-access-blg7j\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.121808 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.122929 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-logs\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.135874 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-config-data\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.144837 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.158393 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.173739 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blg7j\" (UniqueName: \"kubernetes.io/projected/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-kube-api-access-blg7j\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.233899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b7e86-63fb-403b-a297-14a38039065c-logs\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.233955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-config-data\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.233978 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.233997 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.234029 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm6xq\" (UniqueName: \"kubernetes.io/projected/042e7186-1c2e-4a12-b06e-4f99a5d78083-kube-api-access-cm6xq\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.234049 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntjpf\" (UniqueName: \"kubernetes.io/projected/113b7e86-63fb-403b-a297-14a38039065c-kube-api-access-ntjpf\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.234101 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.238005 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b7e86-63fb-403b-a297-14a38039065c-logs\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.240711 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-ztd4g"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.246315 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.253858 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.258995 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-config-data\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.286975 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-ztd4g"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.288573 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntjpf\" (UniqueName: \"kubernetes.io/projected/113b7e86-63fb-403b-a297-14a38039065c-kube-api-access-ntjpf\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335548 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335598 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335620 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-dns-svc\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335659 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm6xq\" (UniqueName: \"kubernetes.io/projected/042e7186-1c2e-4a12-b06e-4f99a5d78083-kube-api-access-cm6xq\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335678 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335707 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rmlz\" (UniqueName: \"kubernetes.io/projected/639e44fb-7faa-4907-b02e-8c985f846925-kube-api-access-5rmlz\") pod 
\"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335736 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-config\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335756 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.340144 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.341407 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.364850 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm6xq\" (UniqueName: \"kubernetes.io/projected/042e7186-1c2e-4a12-b06e-4f99a5d78083-kube-api-access-cm6xq\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.367179 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.408119 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.451126 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.462403 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rmlz\" (UniqueName: \"kubernetes.io/projected/639e44fb-7faa-4907-b02e-8c985f846925-kube-api-access-5rmlz\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.462534 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-config\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.462589 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.462948 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-dns-svc\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.463004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.464180 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.464208 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-config\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.465936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-dns-svc\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.466374 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.488473 
4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rmlz\" (UniqueName: \"kubernetes.io/projected/639e44fb-7faa-4907-b02e-8c985f846925-kube-api-access-5rmlz\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.589834 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.652226 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5wtv6"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.954041 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.990131 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.113295 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fb5lz"] Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.119284 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.128544 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.128848 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.135471 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fb5lz"] Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.187966 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.188024 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-config-data\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.188069 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-scripts\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.188514 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mc64\" (UniqueName: \"kubernetes.io/projected/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-kube-api-access-9mc64\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc 
kubenswrapper[4782]: I0202 10:59:38.283807 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.290382 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-scripts\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.290669 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mc64\" (UniqueName: \"kubernetes.io/projected/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-kube-api-access-9mc64\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.290713 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.290772 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-config-data\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.296718 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-scripts\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.299025 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.301196 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-config-data\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.312848 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mc64\" (UniqueName: \"kubernetes.io/projected/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-kube-api-access-9mc64\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.408785 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-ztd4g"] Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.427393 4782 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.449724 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.789787 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9","Type":"ContainerStarted","Data":"259595f6180ca19619e9584218a7595adeeee90339f24e0dc184cf1c1a9dd391"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.795156 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"113b7e86-63fb-403b-a297-14a38039065c","Type":"ContainerStarted","Data":"0204128d8c9334e74821e1d19a2b03e4db6f2b1212375a7ff2103159f638b687"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.795196 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" event={"ID":"639e44fb-7faa-4907-b02e-8c985f846925","Type":"ContainerStarted","Data":"c2eed060c399f072bbf74377e28b3c99e19fd6ac4fff9760114980a82bb5c7d6"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.795213 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" event={"ID":"639e44fb-7faa-4907-b02e-8c985f846925","Type":"ContainerStarted","Data":"40e8da6bacf81b0807d18f0e00cf0e73a4f50618c9435b4a018769c28384c37e"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.797302 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"042e7186-1c2e-4a12-b06e-4f99a5d78083","Type":"ContainerStarted","Data":"b7dc98ebc02a669721b0d5719711fa47b0524703f51ae30735f586363eb204e3"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.810222 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e271d5c6-aeb4-4181-8712-3c80349c7900","Type":"ContainerStarted","Data":"6984fb97283161100b3f2ea9d0020d436f7eec946eff63636e874e1d070b241f"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.853392 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5wtv6" event={"ID":"baa0ea9b-5d59-4094-a259-2f841d40db2c","Type":"ContainerStarted","Data":"730902e09b299cdd00a01ece9539dce44aec0c2aaecd122d8a4c41d8be4117fb"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.853454 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5wtv6" event={"ID":"baa0ea9b-5d59-4094-a259-2f841d40db2c","Type":"ContainerStarted","Data":"7dccd6a446de27cf17be3c6fed64dab86cfe71404c05a7dac2e3ac65627c370a"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.891457 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-5wtv6" podStartSLOduration=2.891436883 podStartE2EDuration="2.891436883s" podCreationTimestamp="2026-02-02 10:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:38.885355909 +0000 UTC m=+1258.769548625" watchObservedRunningTime="2026-02-02 10:59:38.891436883 +0000 UTC m=+1258.775629599" Feb 02 10:59:39 crc kubenswrapper[4782]: I0202 10:59:39.061018 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fb5lz"] Feb 02 10:59:39 crc kubenswrapper[4782]: I0202 10:59:39.868204 4782 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" event={"ID":"5d87918f-7c3d-4932-a4bd-18a2cf9fc199","Type":"ContainerStarted","Data":"8185fbc7b3d30cf6bb76bc01518fb63e05726e26ac97fb50e13e8ad1440798ce"} Feb 02 10:59:39 crc kubenswrapper[4782]: I0202 10:59:39.868264 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" event={"ID":"5d87918f-7c3d-4932-a4bd-18a2cf9fc199","Type":"ContainerStarted","Data":"abce54fb83bfbed2484a3ffad62e6093bcdfe61f5c810c843b4fd77e933662cd"} Feb 02 10:59:39 crc kubenswrapper[4782]: I0202 10:59:39.875484 4782 generic.go:334] "Generic (PLEG): container finished" podID="639e44fb-7faa-4907-b02e-8c985f846925" containerID="c2eed060c399f072bbf74377e28b3c99e19fd6ac4fff9760114980a82bb5c7d6" exitCode=0 Feb 02 10:59:39 crc kubenswrapper[4782]: I0202 10:59:39.876230 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" event={"ID":"639e44fb-7faa-4907-b02e-8c985f846925","Type":"ContainerDied","Data":"c2eed060c399f072bbf74377e28b3c99e19fd6ac4fff9760114980a82bb5c7d6"} Feb 02 10:59:39 crc kubenswrapper[4782]: I0202 10:59:39.898112 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" podStartSLOduration=1.89809129 podStartE2EDuration="1.89809129s" podCreationTimestamp="2026-02-02 10:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:39.893009344 +0000 UTC m=+1259.777202060" watchObservedRunningTime="2026-02-02 10:59:39.89809129 +0000 UTC m=+1259.782284006" Feb 02 10:59:40 crc kubenswrapper[4782]: I0202 10:59:40.477305 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:40 crc kubenswrapper[4782]: I0202 10:59:40.513048 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.920843 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" event={"ID":"639e44fb-7faa-4907-b02e-8c985f846925","Type":"ContainerStarted","Data":"9e659836c7d1179abcc6a3d9248bc937fe2b60cc342a9cd47e1803c7cc55a544"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.921503 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.923473 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"042e7186-1c2e-4a12-b06e-4f99a5d78083","Type":"ContainerStarted","Data":"098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.923621 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="042e7186-1c2e-4a12-b06e-4f99a5d78083" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9" gracePeriod=30 Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.926434 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e271d5c6-aeb4-4181-8712-3c80349c7900","Type":"ContainerStarted","Data":"d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.930020 
4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9","Type":"ContainerStarted","Data":"2a94a2d8a17e25a35687317e70c9699ed67417079989757b00a53f6eefa2e744"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.930054 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9","Type":"ContainerStarted","Data":"8b34fc0928af6681374be91b55f13deb814067009586c26b07a1ad636317c066"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.932086 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"113b7e86-63fb-403b-a297-14a38039065c","Type":"ContainerStarted","Data":"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.932118 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"113b7e86-63fb-403b-a297-14a38039065c","Type":"ContainerStarted","Data":"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.932240 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-log" containerID="cri-o://fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879" gracePeriod=30 Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.932524 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-metadata" containerID="cri-o://0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef" gracePeriod=30 Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.954674 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" podStartSLOduration=5.954631819 podStartE2EDuration="5.954631819s" podCreationTimestamp="2026-02-02 10:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:42.946095714 +0000 UTC m=+1262.830288430" watchObservedRunningTime="2026-02-02 10:59:42.954631819 +0000 UTC m=+1262.838824535" Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.977744 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.486599406 podStartE2EDuration="6.977724001s" podCreationTimestamp="2026-02-02 10:59:36 +0000 UTC" firstStartedPulling="2026-02-02 10:59:38.292700228 +0000 UTC m=+1258.176892944" lastFinishedPulling="2026-02-02 10:59:41.783824823 +0000 UTC m=+1261.668017539" observedRunningTime="2026-02-02 10:59:42.970941377 +0000 UTC m=+1262.855134083" watchObservedRunningTime="2026-02-02 10:59:42.977724001 +0000 UTC m=+1262.861916717" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.023031 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.274196304 podStartE2EDuration="7.02300953s" podCreationTimestamp="2026-02-02 10:59:36 +0000 UTC" firstStartedPulling="2026-02-02 10:59:38.034151362 +0000 UTC m=+1257.918344078" lastFinishedPulling="2026-02-02 10:59:41.782964588 +0000 UTC m=+1261.667157304" observedRunningTime="2026-02-02 10:59:43.003933643 +0000 UTC m=+1262.888126359" 
watchObservedRunningTime="2026-02-02 10:59:43.02300953 +0000 UTC m=+1262.907202246" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.028986 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.6753411099999997 podStartE2EDuration="7.028966051s" podCreationTimestamp="2026-02-02 10:59:36 +0000 UTC" firstStartedPulling="2026-02-02 10:59:38.432248041 +0000 UTC m=+1258.316440747" lastFinishedPulling="2026-02-02 10:59:41.785872972 +0000 UTC m=+1261.670065688" observedRunningTime="2026-02-02 10:59:43.019839689 +0000 UTC m=+1262.904032405" watchObservedRunningTime="2026-02-02 10:59:43.028966051 +0000 UTC m=+1262.913158767" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.108913 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.298192733 podStartE2EDuration="7.108888764s" podCreationTimestamp="2026-02-02 10:59:36 +0000 UTC" firstStartedPulling="2026-02-02 10:59:37.967089279 +0000 UTC m=+1257.851281995" lastFinishedPulling="2026-02-02 10:59:41.77778531 +0000 UTC m=+1261.661978026" observedRunningTime="2026-02-02 10:59:43.093700648 +0000 UTC m=+1262.977893354" watchObservedRunningTime="2026-02-02 10:59:43.108888764 +0000 UTC m=+1262.993081480" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.878844 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.902964 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-config-data\") pod \"113b7e86-63fb-403b-a297-14a38039065c\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.903010 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-combined-ca-bundle\") pod \"113b7e86-63fb-403b-a297-14a38039065c\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.903077 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b7e86-63fb-403b-a297-14a38039065c-logs\") pod \"113b7e86-63fb-403b-a297-14a38039065c\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.903195 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntjpf\" (UniqueName: \"kubernetes.io/projected/113b7e86-63fb-403b-a297-14a38039065c-kube-api-access-ntjpf\") pod \"113b7e86-63fb-403b-a297-14a38039065c\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.903717 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/113b7e86-63fb-403b-a297-14a38039065c-logs" (OuterVolumeSpecName: "logs") pod "113b7e86-63fb-403b-a297-14a38039065c" (UID: "113b7e86-63fb-403b-a297-14a38039065c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.904700 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b7e86-63fb-403b-a297-14a38039065c-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.918291 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113b7e86-63fb-403b-a297-14a38039065c-kube-api-access-ntjpf" (OuterVolumeSpecName: "kube-api-access-ntjpf") pod "113b7e86-63fb-403b-a297-14a38039065c" (UID: "113b7e86-63fb-403b-a297-14a38039065c"). InnerVolumeSpecName "kube-api-access-ntjpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.947581 4782 generic.go:334] "Generic (PLEG): container finished" podID="113b7e86-63fb-403b-a297-14a38039065c" containerID="0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef" exitCode=0 Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.948608 4782 generic.go:334] "Generic (PLEG): container finished" podID="113b7e86-63fb-403b-a297-14a38039065c" containerID="fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879" exitCode=143 Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.948329 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.948167 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"113b7e86-63fb-403b-a297-14a38039065c","Type":"ContainerDied","Data":"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef"} Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.950077 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"113b7e86-63fb-403b-a297-14a38039065c","Type":"ContainerDied","Data":"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879"} Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.950101 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"113b7e86-63fb-403b-a297-14a38039065c","Type":"ContainerDied","Data":"0204128d8c9334e74821e1d19a2b03e4db6f2b1212375a7ff2103159f638b687"} Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.950132 4782 scope.go:117] "RemoveContainer" containerID="0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.968837 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-config-data" (OuterVolumeSpecName: "config-data") pod "113b7e86-63fb-403b-a297-14a38039065c" (UID: "113b7e86-63fb-403b-a297-14a38039065c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.011232 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntjpf\" (UniqueName: \"kubernetes.io/projected/113b7e86-63fb-403b-a297-14a38039065c-kube-api-access-ntjpf\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.011271 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.029755 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "113b7e86-63fb-403b-a297-14a38039065c" (UID: "113b7e86-63fb-403b-a297-14a38039065c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.090220 4782 scope.go:117] "RemoveContainer" containerID="fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.112904 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.113590 4782 scope.go:117] "RemoveContainer" containerID="0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef" Feb 02 10:59:44 crc kubenswrapper[4782]: E0202 10:59:44.114074 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef\": container with ID starting with 0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef not found: ID does not exist" containerID="0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.114117 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef"} err="failed to get container status \"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef\": rpc error: code = NotFound desc = could not find container \"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef\": container with ID starting with 0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef not found: ID does not exist" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.114142 4782 scope.go:117] "RemoveContainer" containerID="fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879" Feb 02 10:59:44 crc kubenswrapper[4782]: E0202 10:59:44.114422 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879\": container with ID starting with fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879 not found: ID does not exist" containerID="fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.114443 4782 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879"} err="failed to get container status \"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879\": rpc error: code = NotFound desc = could not find container \"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879\": container with ID starting with fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879 not found: ID does not exist" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.114455 4782 scope.go:117] "RemoveContainer" containerID="0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.114837 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef"} err="failed to get container status \"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef\": rpc error: code = NotFound desc = could not find container \"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef\": container with ID starting with 0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef not found: ID does not exist" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.114858 4782 scope.go:117] "RemoveContainer" containerID="fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.115126 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879"} err="failed to get container status \"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879\": rpc error: code = NotFound desc = could not find container \"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879\": container with ID starting with fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879 not found: ID does not exist" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.281781 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.291606 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.314827 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:44 crc kubenswrapper[4782]: E0202 10:59:44.315325 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-metadata" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.315359 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-metadata" Feb 02 10:59:44 crc kubenswrapper[4782]: E0202 10:59:44.315379 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-log" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.315385 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-log" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.315824 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-log" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.315850 4782 
memory_manager.go:354] "RemoveStaleState removing state" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-metadata" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.317849 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.320910 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.321027 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.388167 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.419338 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-logs\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.419383 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nrw5\" (UniqueName: \"kubernetes.io/projected/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-kube-api-access-2nrw5\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.419516 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-config-data\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.419579 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.419768 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.521557 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.521688 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-logs\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.521722 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2nrw5\" (UniqueName: \"kubernetes.io/projected/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-kube-api-access-2nrw5\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.521814 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-config-data\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.521844 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.522754 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-logs\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.526217 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.526406 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.527111 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-config-data\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.545146 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nrw5\" (UniqueName: \"kubernetes.io/projected/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-kube-api-access-2nrw5\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.639445 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.833598 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="113b7e86-63fb-403b-a297-14a38039065c" path="/var/lib/kubelet/pods/113b7e86-63fb-403b-a297-14a38039065c/volumes" Feb 02 10:59:45 crc kubenswrapper[4782]: I0202 10:59:45.204068 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:45 crc kubenswrapper[4782]: I0202 10:59:45.975422 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a","Type":"ContainerStarted","Data":"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191"} Feb 02 10:59:45 crc kubenswrapper[4782]: I0202 10:59:45.975796 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a","Type":"ContainerStarted","Data":"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394"} Feb 02 10:59:45 crc kubenswrapper[4782]: I0202 10:59:45.975808 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a","Type":"ContainerStarted","Data":"ea6a6c08bd3087f1ba9f590b152caf3e7fec76c7a3cd69736de0e388760f5410"} Feb 02 10:59:46 crc kubenswrapper[4782]: I0202 10:59:46.008669 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.008638425 podStartE2EDuration="2.008638425s" podCreationTimestamp="2026-02-02 10:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:45.997961909 +0000 UTC m=+1265.882154615" watchObservedRunningTime="2026-02-02 10:59:46.008638425 +0000 UTC m=+1265.892831141" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.159249 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.159320 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.195960 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.368301 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.368362 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.453121 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.590789 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.662113 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-lp4zt"] Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.662379 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="dnsmasq-dns" 
containerID="cri-o://e8f8698969d1d3fb94fd61cad2a7db600e14b7bfd8f48ebf089d1152818d17de" gracePeriod=10 Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.015098 4782 generic.go:334] "Generic (PLEG): container finished" podID="baa0ea9b-5d59-4094-a259-2f841d40db2c" containerID="730902e09b299cdd00a01ece9539dce44aec0c2aaecd122d8a4c41d8be4117fb" exitCode=0 Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.015210 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5wtv6" event={"ID":"baa0ea9b-5d59-4094-a259-2f841d40db2c","Type":"ContainerDied","Data":"730902e09b299cdd00a01ece9539dce44aec0c2aaecd122d8a4c41d8be4117fb"} Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.022186 4782 generic.go:334] "Generic (PLEG): container finished" podID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerID="e8f8698969d1d3fb94fd61cad2a7db600e14b7bfd8f48ebf089d1152818d17de" exitCode=0 Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.023282 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" event={"ID":"aeac5df4-fc17-4840-b777-4b20a71f603b","Type":"ContainerDied","Data":"e8f8698969d1d3fb94fd61cad2a7db600e14b7bfd8f48ebf089d1152818d17de"} Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.082038 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.275290 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.444497 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-sb\") pod \"aeac5df4-fc17-4840-b777-4b20a71f603b\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.444604 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-nb\") pod \"aeac5df4-fc17-4840-b777-4b20a71f603b\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.444718 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxbzb\" (UniqueName: \"kubernetes.io/projected/aeac5df4-fc17-4840-b777-4b20a71f603b-kube-api-access-cxbzb\") pod \"aeac5df4-fc17-4840-b777-4b20a71f603b\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.444768 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-dns-svc\") pod \"aeac5df4-fc17-4840-b777-4b20a71f603b\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.444797 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-config\") pod \"aeac5df4-fc17-4840-b777-4b20a71f603b\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.453778 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" 
containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.169:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.454111 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.169:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.477901 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeac5df4-fc17-4840-b777-4b20a71f603b-kube-api-access-cxbzb" (OuterVolumeSpecName: "kube-api-access-cxbzb") pod "aeac5df4-fc17-4840-b777-4b20a71f603b" (UID: "aeac5df4-fc17-4840-b777-4b20a71f603b"). InnerVolumeSpecName "kube-api-access-cxbzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.519273 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aeac5df4-fc17-4840-b777-4b20a71f603b" (UID: "aeac5df4-fc17-4840-b777-4b20a71f603b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.520139 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-config" (OuterVolumeSpecName: "config") pod "aeac5df4-fc17-4840-b777-4b20a71f603b" (UID: "aeac5df4-fc17-4840-b777-4b20a71f603b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.542117 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aeac5df4-fc17-4840-b777-4b20a71f603b" (UID: "aeac5df4-fc17-4840-b777-4b20a71f603b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.548844 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.548913 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxbzb\" (UniqueName: \"kubernetes.io/projected/aeac5df4-fc17-4840-b777-4b20a71f603b-kube-api-access-cxbzb\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.548926 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.548937 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.556158 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aeac5df4-fc17-4840-b777-4b20a71f603b" (UID: "aeac5df4-fc17-4840-b777-4b20a71f603b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.650893 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.041101 4782 generic.go:334] "Generic (PLEG): container finished" podID="5d87918f-7c3d-4932-a4bd-18a2cf9fc199" containerID="8185fbc7b3d30cf6bb76bc01518fb63e05726e26ac97fb50e13e8ad1440798ce" exitCode=0 Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.041193 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" event={"ID":"5d87918f-7c3d-4932-a4bd-18a2cf9fc199","Type":"ContainerDied","Data":"8185fbc7b3d30cf6bb76bc01518fb63e05726e26ac97fb50e13e8ad1440798ce"} Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.047701 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.055357 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" event={"ID":"aeac5df4-fc17-4840-b777-4b20a71f603b","Type":"ContainerDied","Data":"8c21bb8034d9faf7eb546bc39d481d9fb7112330466d208d373ff1d4cfc5503c"} Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.055419 4782 scope.go:117] "RemoveContainer" containerID="e8f8698969d1d3fb94fd61cad2a7db600e14b7bfd8f48ebf089d1152818d17de" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.084699 4782 scope.go:117] "RemoveContainer" containerID="235f00f5818c6de4755dcefb6a2d4359499a9277fdd4c9df3ba1b496dc87e676" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.117152 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-lp4zt"] Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.134419 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-lp4zt"] Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.450151 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.591324 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-combined-ca-bundle\") pod \"baa0ea9b-5d59-4094-a259-2f841d40db2c\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.591567 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-config-data\") pod \"baa0ea9b-5d59-4094-a259-2f841d40db2c\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.591816 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-scripts\") pod \"baa0ea9b-5d59-4094-a259-2f841d40db2c\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.591999 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp6t8\" (UniqueName: \"kubernetes.io/projected/baa0ea9b-5d59-4094-a259-2f841d40db2c-kube-api-access-tp6t8\") pod \"baa0ea9b-5d59-4094-a259-2f841d40db2c\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.598711 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-scripts" (OuterVolumeSpecName: "scripts") pod "baa0ea9b-5d59-4094-a259-2f841d40db2c" (UID: "baa0ea9b-5d59-4094-a259-2f841d40db2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.598846 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa0ea9b-5d59-4094-a259-2f841d40db2c-kube-api-access-tp6t8" (OuterVolumeSpecName: "kube-api-access-tp6t8") pod "baa0ea9b-5d59-4094-a259-2f841d40db2c" (UID: "baa0ea9b-5d59-4094-a259-2f841d40db2c"). InnerVolumeSpecName "kube-api-access-tp6t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.628564 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baa0ea9b-5d59-4094-a259-2f841d40db2c" (UID: "baa0ea9b-5d59-4094-a259-2f841d40db2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.633134 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-config-data" (OuterVolumeSpecName: "config-data") pod "baa0ea9b-5d59-4094-a259-2f841d40db2c" (UID: "baa0ea9b-5d59-4094-a259-2f841d40db2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.640421 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.643464 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.696072 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp6t8\" (UniqueName: \"kubernetes.io/projected/baa0ea9b-5d59-4094-a259-2f841d40db2c-kube-api-access-tp6t8\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.696388 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.696474 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.696544 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.056836 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.057016 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5wtv6" event={"ID":"baa0ea9b-5d59-4094-a259-2f841d40db2c","Type":"ContainerDied","Data":"7dccd6a446de27cf17be3c6fed64dab86cfe71404c05a7dac2e3ac65627c370a"} Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.057541 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dccd6a446de27cf17be3c6fed64dab86cfe71404c05a7dac2e3ac65627c370a" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.260821 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.261073 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-log" containerID="cri-o://8b34fc0928af6681374be91b55f13deb814067009586c26b07a1ad636317c066" gracePeriod=30 Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.261475 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-api" containerID="cri-o://2a94a2d8a17e25a35687317e70c9699ed67417079989757b00a53f6eefa2e744" gracePeriod=30 Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.262368 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.262548 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e271d5c6-aeb4-4181-8712-3c80349c7900" containerName="nova-scheduler-scheduler" containerID="cri-o://d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" gracePeriod=30 Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.306375 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.448591 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.610693 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-scripts\") pod \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.610766 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-config-data\") pod \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.610866 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mc64\" (UniqueName: \"kubernetes.io/projected/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-kube-api-access-9mc64\") pod \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.610940 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-combined-ca-bundle\") pod \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.619875 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-kube-api-access-9mc64" (OuterVolumeSpecName: "kube-api-access-9mc64") pod "5d87918f-7c3d-4932-a4bd-18a2cf9fc199" (UID: "5d87918f-7c3d-4932-a4bd-18a2cf9fc199"). InnerVolumeSpecName "kube-api-access-9mc64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.633749 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-scripts" (OuterVolumeSpecName: "scripts") pod "5d87918f-7c3d-4932-a4bd-18a2cf9fc199" (UID: "5d87918f-7c3d-4932-a4bd-18a2cf9fc199"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.647589 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-config-data" (OuterVolumeSpecName: "config-data") pod "5d87918f-7c3d-4932-a4bd-18a2cf9fc199" (UID: "5d87918f-7c3d-4932-a4bd-18a2cf9fc199"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.663104 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d87918f-7c3d-4932-a4bd-18a2cf9fc199" (UID: "5d87918f-7c3d-4932-a4bd-18a2cf9fc199"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.712925 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.712970 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.712984 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mc64\" (UniqueName: \"kubernetes.io/projected/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-kube-api-access-9mc64\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.712997 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.834728 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" path="/var/lib/kubelet/pods/aeac5df4-fc17-4840-b777-4b20a71f603b/volumes" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.067969 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" event={"ID":"5d87918f-7c3d-4932-a4bd-18a2cf9fc199","Type":"ContainerDied","Data":"abce54fb83bfbed2484a3ffad62e6093bcdfe61f5c810c843b4fd77e933662cd"} Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.068008 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abce54fb83bfbed2484a3ffad62e6093bcdfe61f5c810c843b4fd77e933662cd" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.068078 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.070718 4782 generic.go:334] "Generic (PLEG): container finished" podID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerID="8b34fc0928af6681374be91b55f13deb814067009586c26b07a1ad636317c066" exitCode=143 Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.070815 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9","Type":"ContainerDied","Data":"8b34fc0928af6681374be91b55f13deb814067009586c26b07a1ad636317c066"} Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.070901 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-log" containerID="cri-o://5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394" gracePeriod=30 Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.071102 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-metadata" containerID="cri-o://9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191" gracePeriod=30 Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.158996 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 10:59:51 crc kubenswrapper[4782]: E0202 10:59:51.159334 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="dnsmasq-dns" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159349 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="dnsmasq-dns" Feb 02 10:59:51 crc kubenswrapper[4782]: E0202 10:59:51.159360 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="init" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159367 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="init" Feb 02 10:59:51 crc kubenswrapper[4782]: E0202 10:59:51.159384 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa0ea9b-5d59-4094-a259-2f841d40db2c" containerName="nova-manage" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159390 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa0ea9b-5d59-4094-a259-2f841d40db2c" containerName="nova-manage" Feb 02 10:59:51 crc kubenswrapper[4782]: E0202 10:59:51.159401 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d87918f-7c3d-4932-a4bd-18a2cf9fc199" containerName="nova-cell1-conductor-db-sync" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159407 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d87918f-7c3d-4932-a4bd-18a2cf9fc199" containerName="nova-cell1-conductor-db-sync" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159584 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa0ea9b-5d59-4094-a259-2f841d40db2c" containerName="nova-manage" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159605 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="dnsmasq-dns" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159616 4782 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5d87918f-7c3d-4932-a4bd-18a2cf9fc199" containerName="nova-cell1-conductor-db-sync" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.160122 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.164292 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.176183 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.221195 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.323174 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8598880-0557-414a-bbb1-b5d0cdce0738-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.323297 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8598880-0557-414a-bbb1-b5d0cdce0738-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.323794 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st2pf\" (UniqueName: \"kubernetes.io/projected/c8598880-0557-414a-bbb1-b5d0cdce0738-kube-api-access-st2pf\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.424906 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st2pf\" (UniqueName: \"kubernetes.io/projected/c8598880-0557-414a-bbb1-b5d0cdce0738-kube-api-access-st2pf\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.425007 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8598880-0557-414a-bbb1-b5d0cdce0738-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.425050 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8598880-0557-414a-bbb1-b5d0cdce0738-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.430485 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8598880-0557-414a-bbb1-b5d0cdce0738-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc 
kubenswrapper[4782]: I0202 10:59:51.451747 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st2pf\" (UniqueName: \"kubernetes.io/projected/c8598880-0557-414a-bbb1-b5d0cdce0738-kube-api-access-st2pf\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.461433 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8598880-0557-414a-bbb1-b5d0cdce0738-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.477087 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.557179 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.633992 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-combined-ca-bundle\") pod \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.634055 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nrw5\" (UniqueName: \"kubernetes.io/projected/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-kube-api-access-2nrw5\") pod \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.634139 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-config-data\") pod \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.634197 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-logs\") pod \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.634234 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-nova-metadata-tls-certs\") pod \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.640133 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-logs" (OuterVolumeSpecName: "logs") pod "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" (UID: "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.641896 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-kube-api-access-2nrw5" (OuterVolumeSpecName: "kube-api-access-2nrw5") pod "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" (UID: "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a"). InnerVolumeSpecName "kube-api-access-2nrw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.680409 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" (UID: "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.692306 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-config-data" (OuterVolumeSpecName: "config-data") pod "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" (UID: "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.703191 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" (UID: "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.736750 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.736785 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nrw5\" (UniqueName: \"kubernetes.io/projected/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-kube-api-access-2nrw5\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.736798 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.736806 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.736815 4782 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.999113 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.079059 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c8598880-0557-414a-bbb1-b5d0cdce0738","Type":"ContainerStarted","Data":"f91554cf092f64ca8b301e7cb53b71665640b08e814fdf84823f2b4944b60c90"} Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080635 4782 generic.go:334] "Generic (PLEG): container finished" podID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerID="9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191" exitCode=0 Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080680 4782 generic.go:334] "Generic (PLEG): container finished" podID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerID="5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394" exitCode=143 Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080701 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a","Type":"ContainerDied","Data":"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191"} Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080726 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a","Type":"ContainerDied","Data":"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394"} Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080736 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a","Type":"ContainerDied","Data":"ea6a6c08bd3087f1ba9f590b152caf3e7fec76c7a3cd69736de0e388760f5410"} Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080751 4782 scope.go:117] "RemoveContainer" containerID="9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080860 4782 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.130450 4782 scope.go:117] "RemoveContainer" containerID="5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.157879 4782 scope.go:117] "RemoveContainer" containerID="9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191" Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.161170 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191\": container with ID starting with 9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191 not found: ID does not exist" containerID="9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.161218 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191"} err="failed to get container status \"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191\": rpc error: code = NotFound desc = could not find container \"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191\": container with ID starting with 9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191 not found: ID does not exist" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.161248 4782 scope.go:117] "RemoveContainer" containerID="5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394" Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.161610 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394\": container with ID starting with 5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394 not found: ID does not exist" containerID="5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.161684 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394"} err="failed to get container status \"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394\": rpc error: code = NotFound desc = could not find container \"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394\": container with ID starting with 5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394 not found: ID does not exist" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.161751 4782 scope.go:117] "RemoveContainer" containerID="9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.163586 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191"} err="failed to get container status \"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191\": rpc error: code = NotFound desc = could not find container \"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191\": container with ID starting with 9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191 not found: ID does not exist" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 
10:59:52.163621 4782 scope.go:117] "RemoveContainer" containerID="5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.164016 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394"} err="failed to get container status \"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394\": rpc error: code = NotFound desc = could not find container \"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394\": container with ID starting with 5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394 not found: ID does not exist" Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.164608 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.167769 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.167863 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.169540 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.169619 4782 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e271d5c6-aeb4-4181-8712-3c80349c7900" containerName="nova-scheduler-scheduler" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.174850 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.202483 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.202933 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-metadata" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.202954 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-metadata" Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.202980 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-log" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.202988 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-log" Feb 02 
10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.203196 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-metadata" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.203220 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-log" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.204366 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.207983 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.208319 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.221247 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.246047 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.246101 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvq2b\" (UniqueName: \"kubernetes.io/projected/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-kube-api-access-wvq2b\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.246161 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-logs\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.246198 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.246263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-config-data\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.347872 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-config-data\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.347984 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.348013 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvq2b\" (UniqueName: \"kubernetes.io/projected/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-kube-api-access-wvq2b\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.348062 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-logs\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.348100 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.348590 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-logs\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.352574 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-config-data\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.355549 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.355981 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.367319 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvq2b\" (UniqueName: \"kubernetes.io/projected/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-kube-api-access-wvq2b\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.545416 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.833440 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" path="/var/lib/kubelet/pods/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a/volumes" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.951707 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.951777 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.951827 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.952478 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc93bfcd857ff139ba103c2136bd4c7838f73ea68a2b8fc097a6c493cab92dd0"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.952550 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://cc93bfcd857ff139ba103c2136bd4c7838f73ea68a2b8fc097a6c493cab92dd0" gracePeriod=600 Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.977874 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.007825 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: i/o timeout" Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.100383 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="cc93bfcd857ff139ba103c2136bd4c7838f73ea68a2b8fc097a6c493cab92dd0" exitCode=0 Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.100520 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"cc93bfcd857ff139ba103c2136bd4c7838f73ea68a2b8fc097a6c493cab92dd0"} Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.100794 4782 scope.go:117] "RemoveContainer" containerID="723d0d966296427a3d1b5e2811fbfcf2b8df7a346539a78c1cbaf730d23723a1" Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.103301 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0","Type":"ContainerStarted","Data":"b659fbd9e60731442fb339dcfdf8314f4d7167c7f486099f43c6dfe96912afd1"} Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.108902 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c8598880-0557-414a-bbb1-b5d0cdce0738","Type":"ContainerStarted","Data":"da6115dcfc2a1580809cd167f7aa760cd4435d3c231e18574a892ed5f9cab1c2"} Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.109962 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.136436 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.13641828 podStartE2EDuration="2.13641828s" podCreationTimestamp="2026-02-02 10:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:53.133175397 +0000 UTC m=+1273.017368113" watchObservedRunningTime="2026-02-02 10:59:53.13641828 +0000 UTC m=+1273.020610996" Feb 02 10:59:54 crc kubenswrapper[4782]: I0202 10:59:54.118308 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0","Type":"ContainerStarted","Data":"ea761e00f637478bb8b568bb6c81afa34c3f7a4ec22a7b59cb0694a1f3b66f1f"} Feb 02 10:59:54 crc kubenswrapper[4782]: I0202 10:59:54.120014 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0","Type":"ContainerStarted","Data":"12de97960a6c2463ee72adbbcb4dfcd44c67437415fdffd257c34188bef89140"} Feb 02 10:59:54 crc kubenswrapper[4782]: I0202 10:59:54.120634 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"9e7f3d9f7d6457b5c614828f06a2a5456dc06adf6cf2e31e022d381663249dca"} Feb 02 10:59:54 crc kubenswrapper[4782]: I0202 10:59:54.147856 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.147838583 podStartE2EDuration="2.147838583s" podCreationTimestamp="2026-02-02 10:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:54.140907184 +0000 UTC m=+1274.025099900" watchObservedRunningTime="2026-02-02 10:59:54.147838583 +0000 UTC m=+1274.032031299" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.107135 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.137457 4782 generic.go:334] "Generic (PLEG): container finished" podID="e271d5c6-aeb4-4181-8712-3c80349c7900" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" exitCode=0 Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.137877 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e271d5c6-aeb4-4181-8712-3c80349c7900","Type":"ContainerDied","Data":"d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3"} Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.137905 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e271d5c6-aeb4-4181-8712-3c80349c7900","Type":"ContainerDied","Data":"6984fb97283161100b3f2ea9d0020d436f7eec946eff63636e874e1d070b241f"} Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.137921 4782 scope.go:117] "RemoveContainer" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.138024 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.168541 4782 generic.go:334] "Generic (PLEG): container finished" podID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerID="2a94a2d8a17e25a35687317e70c9699ed67417079989757b00a53f6eefa2e744" exitCode=0 Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.168580 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9","Type":"ContainerDied","Data":"2a94a2d8a17e25a35687317e70c9699ed67417079989757b00a53f6eefa2e744"} Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.208667 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.208874 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" containerName="kube-state-metrics" containerID="cri-o://74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d" gracePeriod=30 Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.211846 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvf5g\" (UniqueName: \"kubernetes.io/projected/e271d5c6-aeb4-4181-8712-3c80349c7900-kube-api-access-hvf5g\") pod \"e271d5c6-aeb4-4181-8712-3c80349c7900\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.212057 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-config-data\") pod \"e271d5c6-aeb4-4181-8712-3c80349c7900\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.212094 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-combined-ca-bundle\") pod \"e271d5c6-aeb4-4181-8712-3c80349c7900\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.229084 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.233727 4782 scope.go:117] "RemoveContainer" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" Feb 02 10:59:55 crc kubenswrapper[4782]: E0202 10:59:55.242974 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3\": container with ID starting with d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3 not found: ID does not exist" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.243033 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3"} err="failed to get container status \"d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3\": rpc error: code = NotFound desc = could not find container \"d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3\": container with ID starting with d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3 not found: ID does not exist" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.254974 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e271d5c6-aeb4-4181-8712-3c80349c7900-kube-api-access-hvf5g" (OuterVolumeSpecName: "kube-api-access-hvf5g") pod "e271d5c6-aeb4-4181-8712-3c80349c7900" (UID: "e271d5c6-aeb4-4181-8712-3c80349c7900"). InnerVolumeSpecName "kube-api-access-hvf5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.269023 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e271d5c6-aeb4-4181-8712-3c80349c7900" (UID: "e271d5c6-aeb4-4181-8712-3c80349c7900"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.281470 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-config-data" (OuterVolumeSpecName: "config-data") pod "e271d5c6-aeb4-4181-8712-3c80349c7900" (UID: "e271d5c6-aeb4-4181-8712-3c80349c7900"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.314398 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvf5g\" (UniqueName: \"kubernetes.io/projected/e271d5c6-aeb4-4181-8712-3c80349c7900-kube-api-access-hvf5g\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.314427 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.314436 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.415376 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blg7j\" (UniqueName: \"kubernetes.io/projected/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-kube-api-access-blg7j\") pod \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.415454 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-combined-ca-bundle\") pod \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.415619 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-config-data\") pod \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.415743 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-logs\") pod \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.416597 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-logs" (OuterVolumeSpecName: "logs") pod "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" (UID: "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.421015 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-kube-api-access-blg7j" (OuterVolumeSpecName: "kube-api-access-blg7j") pod "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" (UID: "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9"). InnerVolumeSpecName "kube-api-access-blg7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.452735 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" (UID: "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.458267 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-config-data" (OuterVolumeSpecName: "config-data") pod "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" (UID: "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.490061 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.497486 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.517320 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.517706 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.517732 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.517744 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blg7j\" (UniqueName: \"kubernetes.io/projected/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-kube-api-access-blg7j\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.517754 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: E0202 10:59:55.518178 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-api" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.518264 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-api" Feb 02 10:59:55 crc kubenswrapper[4782]: E0202 10:59:55.518336 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-log" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.518409 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-log" Feb 02 10:59:55 crc kubenswrapper[4782]: E0202 10:59:55.518484 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e271d5c6-aeb4-4181-8712-3c80349c7900" containerName="nova-scheduler-scheduler" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.518547 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e271d5c6-aeb4-4181-8712-3c80349c7900" containerName="nova-scheduler-scheduler" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.518860 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e271d5c6-aeb4-4181-8712-3c80349c7900" containerName="nova-scheduler-scheduler" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.518964 4782 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-api" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.519054 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-log" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.519936 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.531929 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.542008 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.619293 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-config-data\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.619777 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.619986 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg44r\" (UniqueName: \"kubernetes.io/projected/feecb35c-d2a4-4c9b-8f39-8145f39b332c-kube-api-access-wg44r\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.718511 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.722689 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-config-data\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.722763 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.722827 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg44r\" (UniqueName: \"kubernetes.io/projected/feecb35c-d2a4-4c9b-8f39-8145f39b332c-kube-api-access-wg44r\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.728039 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-config-data\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.731132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.755327 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg44r\" (UniqueName: \"kubernetes.io/projected/feecb35c-d2a4-4c9b-8f39-8145f39b332c-kube-api-access-wg44r\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.824329 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9vk7\" (UniqueName: \"kubernetes.io/projected/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1-kube-api-access-b9vk7\") pod \"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1\" (UID: \"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.827832 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1-kube-api-access-b9vk7" (OuterVolumeSpecName: "kube-api-access-b9vk7") pod "a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" (UID: "a1ccfccc-4ba0-4523-97ca-1d5b54034fd1"). InnerVolumeSpecName "kube-api-access-b9vk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.839193 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.926233 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9vk7\" (UniqueName: \"kubernetes.io/projected/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1-kube-api-access-b9vk7\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.177670 4782 generic.go:334] "Generic (PLEG): container finished" podID="a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" containerID="74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d" exitCode=2 Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.177826 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1","Type":"ContainerDied","Data":"74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d"} Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.177931 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1","Type":"ContainerDied","Data":"b5731da46b9909f62f299535fa86ed29a8dd25ea43d89f4988425d732dfa7580"} Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.177950 4782 scope.go:117] "RemoveContainer" containerID="74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.177853 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.189936 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9","Type":"ContainerDied","Data":"259595f6180ca19619e9584218a7595adeeee90339f24e0dc184cf1c1a9dd391"} Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.189973 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.216845 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.227408 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.233711 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.233876 4782 scope.go:117] "RemoveContainer" containerID="74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d" Feb 02 10:59:56 crc kubenswrapper[4782]: E0202 10:59:56.234357 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d\": container with ID starting with 74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d not found: ID does not exist" containerID="74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.234401 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d"} err="failed to get container status \"74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d\": rpc error: code = NotFound desc = could not find container \"74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d\": container with ID starting with 74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d not found: ID does not exist" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.234425 4782 scope.go:117] "RemoveContainer" containerID="2a94a2d8a17e25a35687317e70c9699ed67417079989757b00a53f6eefa2e744" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.243733 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.251348 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:59:56 crc kubenswrapper[4782]: E0202 10:59:56.251797 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" containerName="kube-state-metrics" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.251817 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" containerName="kube-state-metrics" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.251975 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" containerName="kube-state-metrics" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.252616 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.256807 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.257019 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.269405 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.271190 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.275400 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.277174 4782 scope.go:117] "RemoveContainer" containerID="8b34fc0928af6681374be91b55f13deb814067009586c26b07a1ad636317c066" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.278759 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.289164 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.377051 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435585 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6124b52e-8e75-46f7-a40a-a106f60f15be-logs\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435679 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435715 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435740 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mbk2\" (UniqueName: \"kubernetes.io/projected/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-api-access-9mbk2\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435781 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-config-data\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435811 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435891 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435914 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttgnd\" (UniqueName: \"kubernetes.io/projected/6124b52e-8e75-46f7-a40a-a106f60f15be-kube-api-access-ttgnd\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537362 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-config-data\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537582 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537668 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537688 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttgnd\" (UniqueName: \"kubernetes.io/projected/6124b52e-8e75-46f7-a40a-a106f60f15be-kube-api-access-ttgnd\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537720 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6124b52e-8e75-46f7-a40a-a106f60f15be-logs\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537755 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537777 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-combined-ca-bundle\") pod 
\"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537796 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mbk2\" (UniqueName: \"kubernetes.io/projected/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-api-access-9mbk2\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.538502 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6124b52e-8e75-46f7-a40a-a106f60f15be-logs\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.540964 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.557302 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-config-data\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.558911 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.559420 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.560942 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.566997 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mbk2\" (UniqueName: \"kubernetes.io/projected/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-api-access-9mbk2\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.572601 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.583746 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttgnd\" (UniqueName: \"kubernetes.io/projected/6124b52e-8e75-46f7-a40a-a106f60f15be-kube-api-access-ttgnd\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.592683 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.839106 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" path="/var/lib/kubelet/pods/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1/volumes" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.839713 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e271d5c6-aeb4-4181-8712-3c80349c7900" path="/var/lib/kubelet/pods/e271d5c6-aeb4-4181-8712-3c80349c7900/volumes" Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.840288 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" path="/var/lib/kubelet/pods/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9/volumes" Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.157626 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.158261 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-central-agent" containerID="cri-o://0eab3e1922a169100d96f6fb597b0d5d6e2c417157ee64d6c06b97cc49cfa3ff" gracePeriod=30 Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.158769 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="proxy-httpd" containerID="cri-o://f8e5c120275d7db87897ef6e18aba32d130395e2a8cbe47997aa7cbceb7c4b98" gracePeriod=30 Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.158832 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="sg-core" containerID="cri-o://548d7f70bba91005ffd4c7ff8fe65d2cd5bbe9ed5956e0f90c12cb922dec83e9" gracePeriod=30 Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.158891 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-notification-agent" containerID="cri-o://c21c382369813b902b8d01bb5ca3b76271ac75826e3f8a48f08a8227ee3e7c71" gracePeriod=30 Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.200918 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"feecb35c-d2a4-4c9b-8f39-8145f39b332c","Type":"ContainerStarted","Data":"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5"} Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.200962 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"feecb35c-d2a4-4c9b-8f39-8145f39b332c","Type":"ContainerStarted","Data":"499d89b4d3bd8290ea6e83963b172b2e55234950c9bf0978caba78dd300a35cd"} Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.237940 
4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.237922454 podStartE2EDuration="2.237922454s" podCreationTimestamp="2026-02-02 10:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:57.221172844 +0000 UTC m=+1277.105365560" watchObservedRunningTime="2026-02-02 10:59:57.237922454 +0000 UTC m=+1277.122115170" Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.249178 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.307441 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:57 crc kubenswrapper[4782]: W0202 10:59:57.308210 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6124b52e_8e75_46f7_a40a_a106f60f15be.slice/crio-5da223383e51d132edb447f42b09006c474d2dbe6cc27b91396e57bb1e739e76 WatchSource:0}: Error finding container 5da223383e51d132edb447f42b09006c474d2dbe6cc27b91396e57bb1e739e76: Status 404 returned error can't find the container with id 5da223383e51d132edb447f42b09006c474d2dbe6cc27b91396e57bb1e739e76 Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.545782 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.545951 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.218676 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6953ab25-8ddb-4ab3-b006-116f6ad534db","Type":"ContainerStarted","Data":"e5a85f4fdafa3c8a5d69890175c598030e16e7cc48602682b5c858af09a16882"} Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.218903 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6953ab25-8ddb-4ab3-b006-116f6ad534db","Type":"ContainerStarted","Data":"eac7cf780904993861e9404cd049d1f2b95033b7eaf22713161aed6c1b5e7078"} Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.220218 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.226326 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6124b52e-8e75-46f7-a40a-a106f60f15be","Type":"ContainerStarted","Data":"586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5"} Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.226365 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6124b52e-8e75-46f7-a40a-a106f60f15be","Type":"ContainerStarted","Data":"f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4"} Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.226378 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6124b52e-8e75-46f7-a40a-a106f60f15be","Type":"ContainerStarted","Data":"5da223383e51d132edb447f42b09006c474d2dbe6cc27b91396e57bb1e739e76"} Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.233254 4782 generic.go:334] "Generic (PLEG): container finished" podID="28015087-432c-4906-8c57-406f5bf4371b" 
containerID="f8e5c120275d7db87897ef6e18aba32d130395e2a8cbe47997aa7cbceb7c4b98" exitCode=0 Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.233290 4782 generic.go:334] "Generic (PLEG): container finished" podID="28015087-432c-4906-8c57-406f5bf4371b" containerID="548d7f70bba91005ffd4c7ff8fe65d2cd5bbe9ed5956e0f90c12cb922dec83e9" exitCode=2 Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.233304 4782 generic.go:334] "Generic (PLEG): container finished" podID="28015087-432c-4906-8c57-406f5bf4371b" containerID="0eab3e1922a169100d96f6fb597b0d5d6e2c417157ee64d6c06b97cc49cfa3ff" exitCode=0 Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.233815 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerDied","Data":"f8e5c120275d7db87897ef6e18aba32d130395e2a8cbe47997aa7cbceb7c4b98"} Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.233876 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerDied","Data":"548d7f70bba91005ffd4c7ff8fe65d2cd5bbe9ed5956e0f90c12cb922dec83e9"} Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.233890 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerDied","Data":"0eab3e1922a169100d96f6fb597b0d5d6e2c417157ee64d6c06b97cc49cfa3ff"} Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.265843 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.26582332 podStartE2EDuration="2.26582332s" podCreationTimestamp="2026-02-02 10:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:58.261450275 +0000 UTC m=+1278.145642991" watchObservedRunningTime="2026-02-02 10:59:58.26582332 +0000 UTC m=+1278.150016036" Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.268536 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.890819813 podStartE2EDuration="2.268520268s" podCreationTimestamp="2026-02-02 10:59:56 +0000 UTC" firstStartedPulling="2026-02-02 10:59:57.246338315 +0000 UTC m=+1277.130531031" lastFinishedPulling="2026-02-02 10:59:57.62403877 +0000 UTC m=+1277.508231486" observedRunningTime="2026-02-02 10:59:58.243010486 +0000 UTC m=+1278.127203222" watchObservedRunningTime="2026-02-02 10:59:58.268520268 +0000 UTC m=+1278.152712984" Feb 02 11:00:00 crc kubenswrapper[4782]: E0202 11:00:00.106427 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28015087_432c_4906_8c57_406f5bf4371b.slice/crio-c21c382369813b902b8d01bb5ca3b76271ac75826e3f8a48f08a8227ee3e7c71.scope\": RecentStats: unable to find data in memory cache]" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.147712 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"] Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.149025 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.165836 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.165836 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.184585 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"] Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.204513 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62ac376d-42fd-424f-a1bf-281bd9c9d31f-secret-volume\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.204656 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvwxr\" (UniqueName: \"kubernetes.io/projected/62ac376d-42fd-424f-a1bf-281bd9c9d31f-kube-api-access-mvwxr\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.204696 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ac376d-42fd-424f-a1bf-281bd9c9d31f-config-volume\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.252379 4782 generic.go:334] "Generic (PLEG): container finished" podID="28015087-432c-4906-8c57-406f5bf4371b" containerID="c21c382369813b902b8d01bb5ca3b76271ac75826e3f8a48f08a8227ee3e7c71" exitCode=0 Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.252449 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerDied","Data":"c21c382369813b902b8d01bb5ca3b76271ac75826e3f8a48f08a8227ee3e7c71"} Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.307875 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62ac376d-42fd-424f-a1bf-281bd9c9d31f-secret-volume\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.308492 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvwxr\" (UniqueName: \"kubernetes.io/projected/62ac376d-42fd-424f-a1bf-281bd9c9d31f-kube-api-access-mvwxr\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.308530 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ac376d-42fd-424f-a1bf-281bd9c9d31f-config-volume\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.309922 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ac376d-42fd-424f-a1bf-281bd9c9d31f-config-volume\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.315250 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62ac376d-42fd-424f-a1bf-281bd9c9d31f-secret-volume\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.328116 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvwxr\" (UniqueName: \"kubernetes.io/projected/62ac376d-42fd-424f-a1bf-281bd9c9d31f-kube-api-access-mvwxr\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.494988 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.539854 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.613251 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-config-data\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.613705 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-run-httpd\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.613732 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-combined-ca-bundle\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.613797 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-log-httpd\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.613879 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-scripts\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.613919 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cksn\" (UniqueName: \"kubernetes.io/projected/28015087-432c-4906-8c57-406f5bf4371b-kube-api-access-4cksn\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.614660 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.614714 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-sg-core-conf-yaml\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.615518 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.616592 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.620166 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28015087-432c-4906-8c57-406f5bf4371b-kube-api-access-4cksn" (OuterVolumeSpecName: "kube-api-access-4cksn") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "kube-api-access-4cksn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.622126 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-scripts" (OuterVolumeSpecName: "scripts") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.660671 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.719478 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.719516 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cksn\" (UniqueName: \"kubernetes.io/projected/28015087-432c-4906-8c57-406f5bf4371b-kube-api-access-4cksn\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.719527 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.719537 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.719900 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-config-data" (OuterVolumeSpecName: "config-data") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.739292 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.820709 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.820744 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.840315 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.046173 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"] Feb 02 11:00:01 crc kubenswrapper[4782]: W0202 11:00:01.058978 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62ac376d_42fd_424f_a1bf_281bd9c9d31f.slice/crio-f3ab5332b08fd255c43419e4e5b6206b41f5cd0358a770734813c91b1d464b0f WatchSource:0}: Error finding container f3ab5332b08fd255c43419e4e5b6206b41f5cd0358a770734813c91b1d464b0f: Status 404 returned error can't find the container with id f3ab5332b08fd255c43419e4e5b6206b41f5cd0358a770734813c91b1d464b0f Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.264542 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerDied","Data":"130ac9425f514ac20eca02616de022afd5f7d855996320a48e38657ba530c248"} Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.264910 4782 scope.go:117] "RemoveContainer" containerID="f8e5c120275d7db87897ef6e18aba32d130395e2a8cbe47997aa7cbceb7c4b98" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.264564 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.269387 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" event={"ID":"62ac376d-42fd-424f-a1bf-281bd9c9d31f","Type":"ContainerStarted","Data":"a290ebd90dc2cdcb55f14cdbbbcabca2eb0ae3e2b4fabd92e76c199c11dd8634"} Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.269428 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" event={"ID":"62ac376d-42fd-424f-a1bf-281bd9c9d31f","Type":"ContainerStarted","Data":"f3ab5332b08fd255c43419e4e5b6206b41f5cd0358a770734813c91b1d464b0f"} Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.288920 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.295721 4782 scope.go:117] "RemoveContainer" containerID="548d7f70bba91005ffd4c7ff8fe65d2cd5bbe9ed5956e0f90c12cb922dec83e9" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.299102 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.314630 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" podStartSLOduration=1.314608197 podStartE2EDuration="1.314608197s" podCreationTimestamp="2026-02-02 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:01.306496514 +0000 UTC m=+1281.190689230" watchObservedRunningTime="2026-02-02 11:00:01.314608197 +0000 UTC m=+1281.198800933" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.336189 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:01 crc kubenswrapper[4782]: E0202 11:00:01.336661 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-notification-agent" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.336686 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-notification-agent" Feb 02 11:00:01 crc kubenswrapper[4782]: E0202 11:00:01.336718 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="sg-core" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.336726 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="sg-core" Feb 02 11:00:01 crc kubenswrapper[4782]: E0202 11:00:01.336739 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="proxy-httpd" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.336747 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="proxy-httpd" Feb 02 11:00:01 crc kubenswrapper[4782]: E0202 11:00:01.336763 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-central-agent" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.336771 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="28015087-432c-4906-8c57-406f5bf4371b" 
containerName="ceilometer-central-agent" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.336976 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="sg-core" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.337000 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-notification-agent" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.337020 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="proxy-httpd" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.337030 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-central-agent" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.338956 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.339611 4782 scope.go:117] "RemoveContainer" containerID="c21c382369813b902b8d01bb5ca3b76271ac75826e3f8a48f08a8227ee3e7c71" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.342313 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.342427 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.342573 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.351257 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.368931 4782 scope.go:117] "RemoveContainer" containerID="0eab3e1922a169100d96f6fb597b0d5d6e2c417157ee64d6c06b97cc49cfa3ff" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.513702 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533604 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-config-data\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533664 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-run-httpd\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533705 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533730 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533774 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgncs\" (UniqueName: \"kubernetes.io/projected/65da50be-2bcd-4dad-aaaf-cfa5587e7544-kube-api-access-pgncs\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533795 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-log-httpd\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533847 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533867 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.635166 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.637153 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.637828 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgncs\" (UniqueName: \"kubernetes.io/projected/65da50be-2bcd-4dad-aaaf-cfa5587e7544-kube-api-access-pgncs\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.637987 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-log-httpd\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.638424 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-log-httpd\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.638796 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.639254 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.639835 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-config-data\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.639978 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-run-httpd\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.640354 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-run-httpd\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.646744 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-config-data\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.647221 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.647800 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.656581 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.659265 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.659364 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgncs\" (UniqueName: 
\"kubernetes.io/projected/65da50be-2bcd-4dad-aaaf-cfa5587e7544-kube-api-access-pgncs\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.664546 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:00:02 crc kubenswrapper[4782]: W0202 11:00:02.133891 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65da50be_2bcd_4dad_aaaf_cfa5587e7544.slice/crio-1a14e2a1fe0a962870038a11a248f614b06beeb3efe351e24e012f8958bd54c5 WatchSource:0}: Error finding container 1a14e2a1fe0a962870038a11a248f614b06beeb3efe351e24e012f8958bd54c5: Status 404 returned error can't find the container with id 1a14e2a1fe0a962870038a11a248f614b06beeb3efe351e24e012f8958bd54c5 Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.134635 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.280404 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerStarted","Data":"1a14e2a1fe0a962870038a11a248f614b06beeb3efe351e24e012f8958bd54c5"} Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.282559 4782 generic.go:334] "Generic (PLEG): container finished" podID="62ac376d-42fd-424f-a1bf-281bd9c9d31f" containerID="a290ebd90dc2cdcb55f14cdbbbcabca2eb0ae3e2b4fabd92e76c199c11dd8634" exitCode=0 Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.282596 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" event={"ID":"62ac376d-42fd-424f-a1bf-281bd9c9d31f","Type":"ContainerDied","Data":"a290ebd90dc2cdcb55f14cdbbbcabca2eb0ae3e2b4fabd92e76c199c11dd8634"} Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.545605 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.545657 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.833395 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28015087-432c-4906-8c57-406f5bf4371b" path="/var/lib/kubelet/pods/28015087-432c-4906-8c57-406f5bf4371b/volumes" Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.309327 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerStarted","Data":"0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d"} Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.563825 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.176:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.564023 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.176:8775/\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.793936 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.984749 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ac376d-42fd-424f-a1bf-281bd9c9d31f-config-volume\") pod \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.985257 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvwxr\" (UniqueName: \"kubernetes.io/projected/62ac376d-42fd-424f-a1bf-281bd9c9d31f-kube-api-access-mvwxr\") pod \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.985440 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62ac376d-42fd-424f-a1bf-281bd9c9d31f-secret-volume\") pod \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.987352 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ac376d-42fd-424f-a1bf-281bd9c9d31f-config-volume" (OuterVolumeSpecName: "config-volume") pod "62ac376d-42fd-424f-a1bf-281bd9c9d31f" (UID: "62ac376d-42fd-424f-a1bf-281bd9c9d31f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.997873 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ac376d-42fd-424f-a1bf-281bd9c9d31f-kube-api-access-mvwxr" (OuterVolumeSpecName: "kube-api-access-mvwxr") pod "62ac376d-42fd-424f-a1bf-281bd9c9d31f" (UID: "62ac376d-42fd-424f-a1bf-281bd9c9d31f"). InnerVolumeSpecName "kube-api-access-mvwxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.000672 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ac376d-42fd-424f-a1bf-281bd9c9d31f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "62ac376d-42fd-424f-a1bf-281bd9c9d31f" (UID: "62ac376d-42fd-424f-a1bf-281bd9c9d31f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.087908 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62ac376d-42fd-424f-a1bf-281bd9c9d31f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.087951 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ac376d-42fd-424f-a1bf-281bd9c9d31f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.087965 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvwxr\" (UniqueName: \"kubernetes.io/projected/62ac376d-42fd-424f-a1bf-281bd9c9d31f-kube-api-access-mvwxr\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.319051 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerStarted","Data":"545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e"} Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.321467 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" event={"ID":"62ac376d-42fd-424f-a1bf-281bd9c9d31f","Type":"ContainerDied","Data":"f3ab5332b08fd255c43419e4e5b6206b41f5cd0358a770734813c91b1d464b0f"} Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.321512 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ab5332b08fd255c43419e4e5b6206b41f5cd0358a770734813c91b1d464b0f" Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.321524 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:05 crc kubenswrapper[4782]: I0202 11:00:05.331835 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerStarted","Data":"871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75"} Feb 02 11:00:05 crc kubenswrapper[4782]: I0202 11:00:05.847750 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 11:00:05 crc kubenswrapper[4782]: I0202 11:00:05.873801 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 11:00:06 crc kubenswrapper[4782]: I0202 11:00:06.374456 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 11:00:06 crc kubenswrapper[4782]: I0202 11:00:06.584400 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 11:00:06 crc kubenswrapper[4782]: I0202 11:00:06.593284 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:00:06 crc kubenswrapper[4782]: I0202 11:00:06.593355 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:00:07 crc kubenswrapper[4782]: I0202 11:00:07.675853 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:07 crc kubenswrapper[4782]: I0202 11:00:07.675911 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:08 crc kubenswrapper[4782]: I0202 11:00:08.400789 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerStarted","Data":"2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01"} Feb 02 11:00:08 crc kubenswrapper[4782]: I0202 11:00:08.402561 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:00:08 crc kubenswrapper[4782]: I0202 11:00:08.427137 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.276930978 podStartE2EDuration="7.427100422s" podCreationTimestamp="2026-02-02 11:00:01 +0000 UTC" firstStartedPulling="2026-02-02 11:00:02.135566927 +0000 UTC m=+1282.019759633" lastFinishedPulling="2026-02-02 11:00:07.285736361 +0000 UTC m=+1287.169929077" observedRunningTime="2026-02-02 11:00:08.424707623 +0000 UTC m=+1288.308900349" watchObservedRunningTime="2026-02-02 11:00:08.427100422 +0000 UTC m=+1288.311293138" Feb 02 11:00:12 crc kubenswrapper[4782]: I0202 11:00:12.554381 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 11:00:12 crc kubenswrapper[4782]: I0202 11:00:12.555133 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 
11:00:12 crc kubenswrapper[4782]: I0202 11:00:12.560507 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 11:00:12 crc kubenswrapper[4782]: I0202 11:00:12.561134 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.327660 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.449867 4782 generic.go:334] "Generic (PLEG): container finished" podID="042e7186-1c2e-4a12-b06e-4f99a5d78083" containerID="098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9" exitCode=137 Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.449943 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.449986 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"042e7186-1c2e-4a12-b06e-4f99a5d78083","Type":"ContainerDied","Data":"098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9"} Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.450014 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"042e7186-1c2e-4a12-b06e-4f99a5d78083","Type":"ContainerDied","Data":"b7dc98ebc02a669721b0d5719711fa47b0524703f51ae30735f586363eb204e3"} Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.450030 4782 scope.go:117] "RemoveContainer" containerID="098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.466949 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-combined-ca-bundle\") pod \"042e7186-1c2e-4a12-b06e-4f99a5d78083\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.467072 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-config-data\") pod \"042e7186-1c2e-4a12-b06e-4f99a5d78083\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.467291 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm6xq\" (UniqueName: \"kubernetes.io/projected/042e7186-1c2e-4a12-b06e-4f99a5d78083-kube-api-access-cm6xq\") pod \"042e7186-1c2e-4a12-b06e-4f99a5d78083\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.485977 4782 scope.go:117] "RemoveContainer" containerID="098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.486104 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042e7186-1c2e-4a12-b06e-4f99a5d78083-kube-api-access-cm6xq" (OuterVolumeSpecName: "kube-api-access-cm6xq") pod "042e7186-1c2e-4a12-b06e-4f99a5d78083" (UID: "042e7186-1c2e-4a12-b06e-4f99a5d78083"). InnerVolumeSpecName "kube-api-access-cm6xq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:13 crc kubenswrapper[4782]: E0202 11:00:13.487098 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9\": container with ID starting with 098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9 not found: ID does not exist" containerID="098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.487143 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9"} err="failed to get container status \"098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9\": rpc error: code = NotFound desc = could not find container \"098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9\": container with ID starting with 098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9 not found: ID does not exist" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.493673 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "042e7186-1c2e-4a12-b06e-4f99a5d78083" (UID: "042e7186-1c2e-4a12-b06e-4f99a5d78083"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.505250 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-config-data" (OuterVolumeSpecName: "config-data") pod "042e7186-1c2e-4a12-b06e-4f99a5d78083" (UID: "042e7186-1c2e-4a12-b06e-4f99a5d78083"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.570110 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.570144 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.570157 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm6xq\" (UniqueName: \"kubernetes.io/projected/042e7186-1c2e-4a12-b06e-4f99a5d78083-kube-api-access-cm6xq\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.794446 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.805433 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.815167 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:00:13 crc kubenswrapper[4782]: E0202 11:00:13.815912 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ac376d-42fd-424f-a1bf-281bd9c9d31f" containerName="collect-profiles" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.816014 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ac376d-42fd-424f-a1bf-281bd9c9d31f" containerName="collect-profiles" Feb 02 11:00:13 crc kubenswrapper[4782]: E0202 11:00:13.816119 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042e7186-1c2e-4a12-b06e-4f99a5d78083" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.816197 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="042e7186-1c2e-4a12-b06e-4f99a5d78083" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.816457 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="042e7186-1c2e-4a12-b06e-4f99a5d78083" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.816560 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ac376d-42fd-424f-a1bf-281bd9c9d31f" containerName="collect-profiles" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.818848 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.822532 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.822702 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.822917 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.829528 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.976415 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.976474 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.976611 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nlm6\" (UniqueName: \"kubernetes.io/projected/16441e1e-4564-492e-bdce-40eb2652687a-kube-api-access-6nlm6\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.976740 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.976768 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.077886 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.078160 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.078215 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nlm6\" (UniqueName: \"kubernetes.io/projected/16441e1e-4564-492e-bdce-40eb2652687a-kube-api-access-6nlm6\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.078270 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.078286 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.082250 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.082814 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.083265 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.084497 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.100074 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nlm6\" (UniqueName: \"kubernetes.io/projected/16441e1e-4564-492e-bdce-40eb2652687a-kube-api-access-6nlm6\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.135053 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.561816 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:00:14 crc kubenswrapper[4782]: W0202 11:00:14.568179 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16441e1e_4564_492e_bdce_40eb2652687a.slice/crio-927fc192f5acc85cfd8e76906d896b81c61a8151404c989820e09f0e56560a47 WatchSource:0}: Error finding container 927fc192f5acc85cfd8e76906d896b81c61a8151404c989820e09f0e56560a47: Status 404 returned error can't find the container with id 927fc192f5acc85cfd8e76906d896b81c61a8151404c989820e09f0e56560a47 Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.832899 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042e7186-1c2e-4a12-b06e-4f99a5d78083" path="/var/lib/kubelet/pods/042e7186-1c2e-4a12-b06e-4f99a5d78083/volumes" Feb 02 11:00:15 crc kubenswrapper[4782]: I0202 11:00:15.471112 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"16441e1e-4564-492e-bdce-40eb2652687a","Type":"ContainerStarted","Data":"4c086beb1c634c338884796d9471267dcb8c3f5454b940b761a4fd63a110a06b"} Feb 02 11:00:15 crc kubenswrapper[4782]: I0202 11:00:15.471382 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"16441e1e-4564-492e-bdce-40eb2652687a","Type":"ContainerStarted","Data":"927fc192f5acc85cfd8e76906d896b81c61a8151404c989820e09f0e56560a47"} Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.598883 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.599572 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.600325 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.600581 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.610334 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.614056 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.619809 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.619785215 podStartE2EDuration="3.619785215s" podCreationTimestamp="2026-02-02 11:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:15.501965309 +0000 UTC m=+1295.386158025" watchObservedRunningTime="2026-02-02 11:00:16.619785215 +0000 UTC m=+1296.503977931" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.856581 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-pkbw6"] Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.858768 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.929253 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-dns-svc\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.939833 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-config\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.939942 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdkwq\" (UniqueName: \"kubernetes.io/projected/3e601661-fbc5-4fee-b3fb-456f6edc48f4-kube-api-access-kdkwq\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.940151 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.940420 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.987739 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-pkbw6"] Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.042367 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.042453 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-dns-svc\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.042489 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-config\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.042515 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kdkwq\" (UniqueName: \"kubernetes.io/projected/3e601661-fbc5-4fee-b3fb-456f6edc48f4-kube-api-access-kdkwq\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.042632 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.043711 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.044307 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.044477 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-config\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.045002 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-dns-svc\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.064800 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdkwq\" (UniqueName: \"kubernetes.io/projected/3e601661-fbc5-4fee-b3fb-456f6edc48f4-kube-api-access-kdkwq\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.207764 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.809953 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-pkbw6"] Feb 02 11:00:18 crc kubenswrapper[4782]: I0202 11:00:18.506166 4782 generic.go:334] "Generic (PLEG): container finished" podID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerID="3632ebdb9d373630e077154436a2fd0455ce319004676a7d22bf4fd22d09ccf1" exitCode=0 Feb 02 11:00:18 crc kubenswrapper[4782]: I0202 11:00:18.506334 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" event={"ID":"3e601661-fbc5-4fee-b3fb-456f6edc48f4","Type":"ContainerDied","Data":"3632ebdb9d373630e077154436a2fd0455ce319004676a7d22bf4fd22d09ccf1"} Feb 02 11:00:18 crc kubenswrapper[4782]: I0202 11:00:18.507132 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" event={"ID":"3e601661-fbc5-4fee-b3fb-456f6edc48f4","Type":"ContainerStarted","Data":"cd107aafb4197d3a9b8dcab601301a7574e5c0bd0413b81852d1995de35f6645"} Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.135191 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.498000 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.516694 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" event={"ID":"3e601661-fbc5-4fee-b3fb-456f6edc48f4","Type":"ContainerStarted","Data":"9904d146dfcfd7dad397e5d6886fa15b96c9becbf2f18f528f0d3f3a41ce062d"} Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.516825 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-log" containerID="cri-o://f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4" gracePeriod=30 Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.516867 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-api" containerID="cri-o://586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5" gracePeriod=30 Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.545385 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" podStartSLOduration=3.545362877 podStartE2EDuration="3.545362877s" podCreationTimestamp="2026-02-02 11:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:19.539004755 +0000 UTC m=+1299.423197471" watchObservedRunningTime="2026-02-02 11:00:19.545362877 +0000 UTC m=+1299.429555603" Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.843205 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.843549 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-central-agent" containerID="cri-o://0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d" gracePeriod=30 Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 
11:00:19.843595 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-notification-agent" containerID="cri-o://545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e" gracePeriod=30 Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.843598 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="sg-core" containerID="cri-o://871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75" gracePeriod=30 Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.843704 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="proxy-httpd" containerID="cri-o://2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01" gracePeriod=30 Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.857745 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.181:3000/\": EOF" Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.530218 4782 generic.go:334] "Generic (PLEG): container finished" podID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerID="2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01" exitCode=0 Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.530612 4782 generic.go:334] "Generic (PLEG): container finished" podID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerID="871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75" exitCode=2 Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.530625 4782 generic.go:334] "Generic (PLEG): container finished" podID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerID="0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d" exitCode=0 Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.530688 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerDied","Data":"2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01"} Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.530718 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerDied","Data":"871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75"} Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.530734 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerDied","Data":"0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d"} Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.533743 4782 generic.go:334] "Generic (PLEG): container finished" podID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerID="f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4" exitCode=143 Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.533787 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6124b52e-8e75-46f7-a40a-a106f60f15be","Type":"ContainerDied","Data":"f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4"} Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 
11:00:20.534030 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.931223 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013227 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-config-data\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013273 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-run-httpd\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013314 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgncs\" (UniqueName: \"kubernetes.io/projected/65da50be-2bcd-4dad-aaaf-cfa5587e7544-kube-api-access-pgncs\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013357 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-sg-core-conf-yaml\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013391 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013469 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-combined-ca-bundle\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013504 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-log-httpd\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013659 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-ceilometer-tls-certs\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013657 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.017546 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.031877 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts" (OuterVolumeSpecName: "scripts") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.037175 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65da50be-2bcd-4dad-aaaf-cfa5587e7544-kube-api-access-pgncs" (OuterVolumeSpecName: "kube-api-access-pgncs") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "kube-api-access-pgncs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.102762 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.116100 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.116138 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.116150 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.116160 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.116171 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgncs\" (UniqueName: \"kubernetes.io/projected/65da50be-2bcd-4dad-aaaf-cfa5587e7544-kube-api-access-pgncs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.131092 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.195878 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.214229 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-config-data" (OuterVolumeSpecName: "config-data") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.217089 4782 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.217115 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.217125 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.548860 4782 generic.go:334] "Generic (PLEG): container finished" podID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerID="545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e" exitCode=0 Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.549888 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerDied","Data":"545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e"} Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.549915 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerDied","Data":"1a14e2a1fe0a962870038a11a248f614b06beeb3efe351e24e012f8958bd54c5"} Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.549932 4782 scope.go:117] "RemoveContainer" containerID="2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.549948 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.575332 4782 scope.go:117] "RemoveContainer" containerID="871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.593745 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.615325 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.621911 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.622305 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="sg-core" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622324 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="sg-core" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.622335 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-central-agent" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622348 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-central-agent" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.622378 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-notification-agent" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622387 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-notification-agent" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.622403 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="proxy-httpd" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622412 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="proxy-httpd" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622605 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-central-agent" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622623 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="sg-core" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622634 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="proxy-httpd" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622666 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-notification-agent" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.624450 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.630575 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.632671 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.632891 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.638494 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.639077 4782 scope.go:117] "RemoveContainer" containerID="545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.674129 4782 scope.go:117] "RemoveContainer" containerID="0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.698505 4782 scope.go:117] "RemoveContainer" containerID="2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.699046 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01\": container with ID starting with 2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01 not found: ID does not exist" containerID="2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.699085 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01"} err="failed to get container status \"2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01\": rpc error: code = NotFound desc = could not find container \"2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01\": container with ID starting with 2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01 not found: ID does not exist" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.699112 4782 scope.go:117] "RemoveContainer" containerID="871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.699832 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75\": container with ID starting with 871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75 not found: ID does not exist" containerID="871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.699862 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75"} err="failed to get container status \"871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75\": rpc error: code = NotFound desc = could not find container \"871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75\": container with ID starting with 871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75 not found: ID does not exist" Feb 02 11:00:21 
crc kubenswrapper[4782]: I0202 11:00:21.699881 4782 scope.go:117] "RemoveContainer" containerID="545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.700292 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e\": container with ID starting with 545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e not found: ID does not exist" containerID="545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.700319 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e"} err="failed to get container status \"545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e\": rpc error: code = NotFound desc = could not find container \"545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e\": container with ID starting with 545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e not found: ID does not exist" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.700332 4782 scope.go:117] "RemoveContainer" containerID="0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.700819 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d\": container with ID starting with 0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d not found: ID does not exist" containerID="0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.700847 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d"} err="failed to get container status \"0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d\": rpc error: code = NotFound desc = could not find container \"0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d\": container with ID starting with 0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d not found: ID does not exist" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.729696 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-config-data\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.729941 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.729974 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-run-httpd\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " 
pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.730010 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-log-httpd\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.730031 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.730127 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.730315 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-scripts\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.730439 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ztm\" (UniqueName: \"kubernetes.io/projected/497f3642-7f3b-417c-aa52-2ed3ddbcac75-kube-api-access-d8ztm\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831131 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-config-data\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831191 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831221 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-run-httpd\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831282 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-log-httpd\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831318 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831353 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831380 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-scripts\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831415 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8ztm\" (UniqueName: \"kubernetes.io/projected/497f3642-7f3b-417c-aa52-2ed3ddbcac75-kube-api-access-d8ztm\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.832381 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-log-httpd\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.832619 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-run-httpd\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.836429 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.837138 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.837225 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.838831 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-config-data\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.849766 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-scripts\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.861291 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8ztm\" (UniqueName: \"kubernetes.io/projected/497f3642-7f3b-417c-aa52-2ed3ddbcac75-kube-api-access-d8ztm\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.960100 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:00:22 crc kubenswrapper[4782]: I0202 11:00:22.426720 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:22 crc kubenswrapper[4782]: W0202 11:00:22.435916 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod497f3642_7f3b_417c_aa52_2ed3ddbcac75.slice/crio-64999423f31f3794e6c490461b475ade46dbc3e6082014de1c1140111ca7c591 WatchSource:0}: Error finding container 64999423f31f3794e6c490461b475ade46dbc3e6082014de1c1140111ca7c591: Status 404 returned error can't find the container with id 64999423f31f3794e6c490461b475ade46dbc3e6082014de1c1140111ca7c591 Feb 02 11:00:22 crc kubenswrapper[4782]: I0202 11:00:22.444213 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:00:22 crc kubenswrapper[4782]: I0202 11:00:22.561200 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerStarted","Data":"64999423f31f3794e6c490461b475ade46dbc3e6082014de1c1140111ca7c591"} Feb 02 11:00:22 crc kubenswrapper[4782]: I0202 11:00:22.831523 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" path="/var/lib/kubelet/pods/65da50be-2bcd-4dad-aaaf-cfa5587e7544/volumes" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.113439 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.270500 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6124b52e-8e75-46f7-a40a-a106f60f15be-logs\") pod \"6124b52e-8e75-46f7-a40a-a106f60f15be\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.270588 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttgnd\" (UniqueName: \"kubernetes.io/projected/6124b52e-8e75-46f7-a40a-a106f60f15be-kube-api-access-ttgnd\") pod \"6124b52e-8e75-46f7-a40a-a106f60f15be\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.270758 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-combined-ca-bundle\") pod \"6124b52e-8e75-46f7-a40a-a106f60f15be\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.270812 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-config-data\") pod \"6124b52e-8e75-46f7-a40a-a106f60f15be\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.271169 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6124b52e-8e75-46f7-a40a-a106f60f15be-logs" (OuterVolumeSpecName: "logs") pod "6124b52e-8e75-46f7-a40a-a106f60f15be" (UID: "6124b52e-8e75-46f7-a40a-a106f60f15be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.271578 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6124b52e-8e75-46f7-a40a-a106f60f15be-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.285920 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6124b52e-8e75-46f7-a40a-a106f60f15be-kube-api-access-ttgnd" (OuterVolumeSpecName: "kube-api-access-ttgnd") pod "6124b52e-8e75-46f7-a40a-a106f60f15be" (UID: "6124b52e-8e75-46f7-a40a-a106f60f15be"). InnerVolumeSpecName "kube-api-access-ttgnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.309751 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-config-data" (OuterVolumeSpecName: "config-data") pod "6124b52e-8e75-46f7-a40a-a106f60f15be" (UID: "6124b52e-8e75-46f7-a40a-a106f60f15be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.323980 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6124b52e-8e75-46f7-a40a-a106f60f15be" (UID: "6124b52e-8e75-46f7-a40a-a106f60f15be"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.380177 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttgnd\" (UniqueName: \"kubernetes.io/projected/6124b52e-8e75-46f7-a40a-a106f60f15be-kube-api-access-ttgnd\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.380220 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.380230 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.571057 4782 generic.go:334] "Generic (PLEG): container finished" podID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerID="586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5" exitCode=0 Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.571122 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6124b52e-8e75-46f7-a40a-a106f60f15be","Type":"ContainerDied","Data":"586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5"} Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.571146 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6124b52e-8e75-46f7-a40a-a106f60f15be","Type":"ContainerDied","Data":"5da223383e51d132edb447f42b09006c474d2dbe6cc27b91396e57bb1e739e76"} Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.571164 4782 scope.go:117] "RemoveContainer" containerID="586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.571186 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.577198 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerStarted","Data":"46a450a4fb1e112f420ad3a53c0cc5db48370b4a72c1654cd86cff0553015607"} Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.606134 4782 scope.go:117] "RemoveContainer" containerID="f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.614719 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.626475 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.637754 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.638069 4782 scope.go:117] "RemoveContainer" containerID="586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5" Feb 02 11:00:23 crc kubenswrapper[4782]: E0202 11:00:23.638229 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-log" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.638249 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-log" Feb 02 11:00:23 crc kubenswrapper[4782]: E0202 11:00:23.638285 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-api" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.638293 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-api" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.638499 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-api" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.638520 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-log" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.639651 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: E0202 11:00:23.644056 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5\": container with ID starting with 586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5 not found: ID does not exist" containerID="586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.644116 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5"} err="failed to get container status \"586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5\": rpc error: code = NotFound desc = could not find container \"586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5\": container with ID starting with 586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5 not found: ID does not exist" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.644147 4782 scope.go:117] "RemoveContainer" containerID="f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4" Feb 02 11:00:23 crc kubenswrapper[4782]: E0202 11:00:23.645308 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4\": container with ID starting with f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4 not found: ID does not exist" containerID="f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.645350 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4"} err="failed to get container status \"f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4\": rpc error: code = NotFound desc = could not find container \"f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4\": container with ID starting with f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4 not found: ID does not exist" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.650173 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.650523 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.651140 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.653770 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.687073 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.687465 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8scw9\" (UniqueName: \"kubernetes.io/projected/1654143f-6a4d-400a-9879-aeddb7807563-kube-api-access-8scw9\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.687499 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-config-data\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.687520 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-public-tls-certs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.687615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1654143f-6a4d-400a-9879-aeddb7807563-logs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.687696 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.788691 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8scw9\" (UniqueName: \"kubernetes.io/projected/1654143f-6a4d-400a-9879-aeddb7807563-kube-api-access-8scw9\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.788943 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-config-data\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.789046 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-public-tls-certs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.789214 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1654143f-6a4d-400a-9879-aeddb7807563-logs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.789347 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.789484 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.791310 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1654143f-6a4d-400a-9879-aeddb7807563-logs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.793929 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.795066 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-config-data\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.797633 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-public-tls-certs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.797741 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.806457 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8scw9\" (UniqueName: \"kubernetes.io/projected/1654143f-6a4d-400a-9879-aeddb7807563-kube-api-access-8scw9\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.985025 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.135761 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.170788 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.542216 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:24 crc kubenswrapper[4782]: W0202 11:00:24.545861 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1654143f_6a4d_400a_9879_aeddb7807563.slice/crio-1bbf1b70356ad1417db8ddfc5e871c0f1d62f0532842271b9f250d7cab9a701c WatchSource:0}: Error finding container 1bbf1b70356ad1417db8ddfc5e871c0f1d62f0532842271b9f250d7cab9a701c: Status 404 returned error can't find the container with id 1bbf1b70356ad1417db8ddfc5e871c0f1d62f0532842271b9f250d7cab9a701c Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.587142 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1654143f-6a4d-400a-9879-aeddb7807563","Type":"ContainerStarted","Data":"1bbf1b70356ad1417db8ddfc5e871c0f1d62f0532842271b9f250d7cab9a701c"} Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.588948 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerStarted","Data":"bbba23b489e8538f8cd4964c5dafc1fdbf720f48e53f9541bcc5bed2b196da47"} Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.588971 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerStarted","Data":"ff86f60fa3072a6f2315da2b189baa4e07115cbc11bced2bb2789b7a5ef65ffc"} Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.612939 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.804776 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lxwch"] Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.806168 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.810983 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.811834 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.819134 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lxwch"] Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.851943 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.852028 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-config-data\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.852052 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-scripts\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.852133 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r28v\" (UniqueName: \"kubernetes.io/projected/d921bd77-679d-4722-8238-a75dc4f3b6b5-kube-api-access-6r28v\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.852813 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" path="/var/lib/kubelet/pods/6124b52e-8e75-46f7-a40a-a106f60f15be/volumes" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.953995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.954068 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-config-data\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.954092 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-scripts\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: 
\"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.954161 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r28v\" (UniqueName: \"kubernetes.io/projected/d921bd77-679d-4722-8238-a75dc4f3b6b5-kube-api-access-6r28v\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.958918 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-scripts\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.959583 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.973303 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-config-data\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.983471 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r28v\" (UniqueName: \"kubernetes.io/projected/d921bd77-679d-4722-8238-a75dc4f3b6b5-kube-api-access-6r28v\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:25 crc kubenswrapper[4782]: I0202 11:00:25.158112 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:25 crc kubenswrapper[4782]: I0202 11:00:25.621338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1654143f-6a4d-400a-9879-aeddb7807563","Type":"ContainerStarted","Data":"d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e"} Feb 02 11:00:25 crc kubenswrapper[4782]: I0202 11:00:25.621697 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1654143f-6a4d-400a-9879-aeddb7807563","Type":"ContainerStarted","Data":"d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead"} Feb 02 11:00:25 crc kubenswrapper[4782]: I0202 11:00:25.652748 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.652726151 podStartE2EDuration="2.652726151s" podCreationTimestamp="2026-02-02 11:00:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:25.649830467 +0000 UTC m=+1305.534023193" watchObservedRunningTime="2026-02-02 11:00:25.652726151 +0000 UTC m=+1305.536918867" Feb 02 11:00:25 crc kubenswrapper[4782]: W0202 11:00:25.666225 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd921bd77_679d_4722_8238_a75dc4f3b6b5.slice/crio-84942b643547128e4555c440451002542b7d7973d8f3829145644799317de442 WatchSource:0}: Error finding container 84942b643547128e4555c440451002542b7d7973d8f3829145644799317de442: Status 404 returned error can't find the container with id 84942b643547128e4555c440451002542b7d7973d8f3829145644799317de442 Feb 02 11:00:25 crc kubenswrapper[4782]: I0202 11:00:25.669585 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lxwch"] Feb 02 11:00:26 crc kubenswrapper[4782]: I0202 11:00:26.631672 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lxwch" event={"ID":"d921bd77-679d-4722-8238-a75dc4f3b6b5","Type":"ContainerStarted","Data":"31af4ce695c2a4475fc8775213cd57460451e43f7c30b86186b9592d2359448f"} Feb 02 11:00:26 crc kubenswrapper[4782]: I0202 11:00:26.633219 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lxwch" event={"ID":"d921bd77-679d-4722-8238-a75dc4f3b6b5","Type":"ContainerStarted","Data":"84942b643547128e4555c440451002542b7d7973d8f3829145644799317de442"} Feb 02 11:00:26 crc kubenswrapper[4782]: I0202 11:00:26.657008 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lxwch" podStartSLOduration=2.656992169 podStartE2EDuration="2.656992169s" podCreationTimestamp="2026-02-02 11:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:26.649946627 +0000 UTC m=+1306.534139343" watchObservedRunningTime="2026-02-02 11:00:26.656992169 +0000 UTC m=+1306.541184885" Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.209215 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.269184 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-ztd4g"] Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.269455 4782 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="dnsmasq-dns" containerID="cri-o://9e659836c7d1179abcc6a3d9248bc937fe2b60cc342a9cd47e1803c7cc55a544" gracePeriod=10 Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.651773 4782 generic.go:334] "Generic (PLEG): container finished" podID="639e44fb-7faa-4907-b02e-8c985f846925" containerID="9e659836c7d1179abcc6a3d9248bc937fe2b60cc342a9cd47e1803c7cc55a544" exitCode=0 Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.651877 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" event={"ID":"639e44fb-7faa-4907-b02e-8c985f846925","Type":"ContainerDied","Data":"9e659836c7d1179abcc6a3d9248bc937fe2b60cc342a9cd47e1803c7cc55a544"} Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.661963 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerStarted","Data":"f3d87444550f41bd00e5c006bf7055a00889453d07496499637cd29b4b017976"} Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.662053 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.700082 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.215083405 podStartE2EDuration="6.70006176s" podCreationTimestamp="2026-02-02 11:00:21 +0000 UTC" firstStartedPulling="2026-02-02 11:00:22.443962325 +0000 UTC m=+1302.328155041" lastFinishedPulling="2026-02-02 11:00:26.92894068 +0000 UTC m=+1306.813133396" observedRunningTime="2026-02-02 11:00:27.695307974 +0000 UTC m=+1307.579500700" watchObservedRunningTime="2026-02-02 11:00:27.70006176 +0000 UTC m=+1307.584254476" Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.855196 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.016738 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-config\") pod \"639e44fb-7faa-4907-b02e-8c985f846925\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.016816 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-sb\") pod \"639e44fb-7faa-4907-b02e-8c985f846925\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.016908 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-dns-svc\") pod \"639e44fb-7faa-4907-b02e-8c985f846925\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.016947 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rmlz\" (UniqueName: \"kubernetes.io/projected/639e44fb-7faa-4907-b02e-8c985f846925-kube-api-access-5rmlz\") pod \"639e44fb-7faa-4907-b02e-8c985f846925\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.016972 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-nb\") pod \"639e44fb-7faa-4907-b02e-8c985f846925\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.023873 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/639e44fb-7faa-4907-b02e-8c985f846925-kube-api-access-5rmlz" (OuterVolumeSpecName: "kube-api-access-5rmlz") pod "639e44fb-7faa-4907-b02e-8c985f846925" (UID: "639e44fb-7faa-4907-b02e-8c985f846925"). InnerVolumeSpecName "kube-api-access-5rmlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.084503 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "639e44fb-7faa-4907-b02e-8c985f846925" (UID: "639e44fb-7faa-4907-b02e-8c985f846925"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.098182 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-config" (OuterVolumeSpecName: "config") pod "639e44fb-7faa-4907-b02e-8c985f846925" (UID: "639e44fb-7faa-4907-b02e-8c985f846925"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.104603 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "639e44fb-7faa-4907-b02e-8c985f846925" (UID: "639e44fb-7faa-4907-b02e-8c985f846925"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.111697 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "639e44fb-7faa-4907-b02e-8c985f846925" (UID: "639e44fb-7faa-4907-b02e-8c985f846925"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.120023 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.120056 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.120066 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.120076 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.120086 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rmlz\" (UniqueName: \"kubernetes.io/projected/639e44fb-7faa-4907-b02e-8c985f846925-kube-api-access-5rmlz\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.672068 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.672338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" event={"ID":"639e44fb-7faa-4907-b02e-8c985f846925","Type":"ContainerDied","Data":"40e8da6bacf81b0807d18f0e00cf0e73a4f50618c9435b4a018769c28384c37e"} Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.673238 4782 scope.go:117] "RemoveContainer" containerID="9e659836c7d1179abcc6a3d9248bc937fe2b60cc342a9cd47e1803c7cc55a544" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.694911 4782 scope.go:117] "RemoveContainer" containerID="c2eed060c399f072bbf74377e28b3c99e19fd6ac4fff9760114980a82bb5c7d6" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.712169 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-ztd4g"] Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.727859 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-ztd4g"] Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.831017 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="639e44fb-7faa-4907-b02e-8c985f846925" path="/var/lib/kubelet/pods/639e44fb-7faa-4907-b02e-8c985f846925/volumes" Feb 02 11:00:31 crc kubenswrapper[4782]: I0202 11:00:31.712276 4782 generic.go:334] "Generic (PLEG): container finished" podID="d921bd77-679d-4722-8238-a75dc4f3b6b5" containerID="31af4ce695c2a4475fc8775213cd57460451e43f7c30b86186b9592d2359448f" exitCode=0 Feb 02 11:00:31 crc kubenswrapper[4782]: I0202 11:00:31.712316 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lxwch" event={"ID":"d921bd77-679d-4722-8238-a75dc4f3b6b5","Type":"ContainerDied","Data":"31af4ce695c2a4475fc8775213cd57460451e43f7c30b86186b9592d2359448f"} Feb 02 11:00:32 crc kubenswrapper[4782]: I0202 11:00:32.591019 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: i/o timeout" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.075807 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.241768 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-config-data\") pod \"d921bd77-679d-4722-8238-a75dc4f3b6b5\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.242555 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-scripts\") pod \"d921bd77-679d-4722-8238-a75dc4f3b6b5\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.242604 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-combined-ca-bundle\") pod \"d921bd77-679d-4722-8238-a75dc4f3b6b5\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.242731 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r28v\" (UniqueName: \"kubernetes.io/projected/d921bd77-679d-4722-8238-a75dc4f3b6b5-kube-api-access-6r28v\") pod \"d921bd77-679d-4722-8238-a75dc4f3b6b5\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.247873 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d921bd77-679d-4722-8238-a75dc4f3b6b5-kube-api-access-6r28v" (OuterVolumeSpecName: "kube-api-access-6r28v") pod "d921bd77-679d-4722-8238-a75dc4f3b6b5" (UID: "d921bd77-679d-4722-8238-a75dc4f3b6b5"). InnerVolumeSpecName "kube-api-access-6r28v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.248174 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-scripts" (OuterVolumeSpecName: "scripts") pod "d921bd77-679d-4722-8238-a75dc4f3b6b5" (UID: "d921bd77-679d-4722-8238-a75dc4f3b6b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.267808 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-config-data" (OuterVolumeSpecName: "config-data") pod "d921bd77-679d-4722-8238-a75dc4f3b6b5" (UID: "d921bd77-679d-4722-8238-a75dc4f3b6b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.278354 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d921bd77-679d-4722-8238-a75dc4f3b6b5" (UID: "d921bd77-679d-4722-8238-a75dc4f3b6b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.345359 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.345399 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.345412 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.345425 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r28v\" (UniqueName: \"kubernetes.io/projected/d921bd77-679d-4722-8238-a75dc4f3b6b5-kube-api-access-6r28v\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.735368 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lxwch" event={"ID":"d921bd77-679d-4722-8238-a75dc4f3b6b5","Type":"ContainerDied","Data":"84942b643547128e4555c440451002542b7d7973d8f3829145644799317de442"} Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.735408 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84942b643547128e4555c440451002542b7d7973d8f3829145644799317de442" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.735477 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.986178 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.986304 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.013475 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.027364 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.027634 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="feecb35c-d2a4-4c9b-8f39-8145f39b332c" containerName="nova-scheduler-scheduler" containerID="cri-o://78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5" gracePeriod=30 Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.149663 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.150048 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-log" containerID="cri-o://12de97960a6c2463ee72adbbcb4dfcd44c67437415fdffd257c34188bef89140" gracePeriod=30 Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.150331 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" 
containerName="nova-metadata-metadata" containerID="cri-o://ea761e00f637478bb8b568bb6c81afa34c3f7a4ec22a7b59cb0694a1f3b66f1f" gracePeriod=30 Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.744752 4782 generic.go:334] "Generic (PLEG): container finished" podID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerID="12de97960a6c2463ee72adbbcb4dfcd44c67437415fdffd257c34188bef89140" exitCode=143 Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.744831 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0","Type":"ContainerDied","Data":"12de97960a6c2463ee72adbbcb4dfcd44c67437415fdffd257c34188bef89140"} Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.745273 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-log" containerID="cri-o://d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead" gracePeriod=30 Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.745337 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-api" containerID="cri-o://d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e" gracePeriod=30 Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.750829 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": EOF" Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.750849 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": EOF" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.577114 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.688897 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-config-data\") pod \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.689074 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-combined-ca-bundle\") pod \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.689134 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg44r\" (UniqueName: \"kubernetes.io/projected/feecb35c-d2a4-4c9b-8f39-8145f39b332c-kube-api-access-wg44r\") pod \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.698857 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feecb35c-d2a4-4c9b-8f39-8145f39b332c-kube-api-access-wg44r" (OuterVolumeSpecName: "kube-api-access-wg44r") pod "feecb35c-d2a4-4c9b-8f39-8145f39b332c" (UID: "feecb35c-d2a4-4c9b-8f39-8145f39b332c"). InnerVolumeSpecName "kube-api-access-wg44r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.723911 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "feecb35c-d2a4-4c9b-8f39-8145f39b332c" (UID: "feecb35c-d2a4-4c9b-8f39-8145f39b332c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.729820 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-config-data" (OuterVolumeSpecName: "config-data") pod "feecb35c-d2a4-4c9b-8f39-8145f39b332c" (UID: "feecb35c-d2a4-4c9b-8f39-8145f39b332c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.769026 4782 generic.go:334] "Generic (PLEG): container finished" podID="feecb35c-d2a4-4c9b-8f39-8145f39b332c" containerID="78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5" exitCode=0 Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.769359 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.769462 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"feecb35c-d2a4-4c9b-8f39-8145f39b332c","Type":"ContainerDied","Data":"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5"} Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.769543 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"feecb35c-d2a4-4c9b-8f39-8145f39b332c","Type":"ContainerDied","Data":"499d89b4d3bd8290ea6e83963b172b2e55234950c9bf0978caba78dd300a35cd"} Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.769567 4782 scope.go:117] "RemoveContainer" containerID="78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.787611 4782 generic.go:334] "Generic (PLEG): container finished" podID="1654143f-6a4d-400a-9879-aeddb7807563" containerID="d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead" exitCode=143 Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.787706 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1654143f-6a4d-400a-9879-aeddb7807563","Type":"ContainerDied","Data":"d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead"} Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.790837 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.790869 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg44r\" (UniqueName: \"kubernetes.io/projected/feecb35c-d2a4-4c9b-8f39-8145f39b332c-kube-api-access-wg44r\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.790879 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.824366 4782 scope.go:117] "RemoveContainer" containerID="78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.832949 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:00:35 crc kubenswrapper[4782]: E0202 11:00:35.833159 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5\": container with ID starting with 78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5 not found: ID does not exist" containerID="78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.833211 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5"} err="failed to get container status \"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5\": rpc error: code = NotFound desc = could not find container \"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5\": container with ID starting with 
78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5 not found: ID does not exist" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.841269 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.854456 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:00:35 crc kubenswrapper[4782]: E0202 11:00:35.854994 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d921bd77-679d-4722-8238-a75dc4f3b6b5" containerName="nova-manage" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.855058 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d921bd77-679d-4722-8238-a75dc4f3b6b5" containerName="nova-manage" Feb 02 11:00:35 crc kubenswrapper[4782]: E0202 11:00:35.855150 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="dnsmasq-dns" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.855214 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="dnsmasq-dns" Feb 02 11:00:35 crc kubenswrapper[4782]: E0202 11:00:35.855275 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feecb35c-d2a4-4c9b-8f39-8145f39b332c" containerName="nova-scheduler-scheduler" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.855325 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="feecb35c-d2a4-4c9b-8f39-8145f39b332c" containerName="nova-scheduler-scheduler" Feb 02 11:00:35 crc kubenswrapper[4782]: E0202 11:00:35.855379 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="init" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.856135 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="init" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.856425 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="dnsmasq-dns" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.856517 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d921bd77-679d-4722-8238-a75dc4f3b6b5" containerName="nova-manage" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.856582 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="feecb35c-d2a4-4c9b-8f39-8145f39b332c" containerName="nova-scheduler-scheduler" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.857233 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.859513 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.891976 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.994724 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2np\" (UniqueName: \"kubernetes.io/projected/47aff64c-0afc-4b3c-9e90-cbe926943170-kube-api-access-8b2np\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.995797 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47aff64c-0afc-4b3c-9e90-cbe926943170-config-data\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.995998 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aff64c-0afc-4b3c-9e90-cbe926943170-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.098043 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47aff64c-0afc-4b3c-9e90-cbe926943170-config-data\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.098312 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aff64c-0afc-4b3c-9e90-cbe926943170-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.098485 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2np\" (UniqueName: \"kubernetes.io/projected/47aff64c-0afc-4b3c-9e90-cbe926943170-kube-api-access-8b2np\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.103147 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47aff64c-0afc-4b3c-9e90-cbe926943170-config-data\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.105271 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aff64c-0afc-4b3c-9e90-cbe926943170-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.116699 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b2np\" (UniqueName: 
\"kubernetes.io/projected/47aff64c-0afc-4b3c-9e90-cbe926943170-kube-api-access-8b2np\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.180813 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.649442 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.798737 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47aff64c-0afc-4b3c-9e90-cbe926943170","Type":"ContainerStarted","Data":"981442e0b581c5dd5c71520353e24009e588d100fa7e01f322a93b8d6454d0c0"} Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.833892 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feecb35c-d2a4-4c9b-8f39-8145f39b332c" path="/var/lib/kubelet/pods/feecb35c-d2a4-4c9b-8f39-8145f39b332c/volumes" Feb 02 11:00:37 crc kubenswrapper[4782]: I0202 11:00:37.570125 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.176:8775/\": read tcp 10.217.0.2:32898->10.217.0.176:8775: read: connection reset by peer" Feb 02 11:00:37 crc kubenswrapper[4782]: I0202 11:00:37.570155 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.176:8775/\": read tcp 10.217.0.2:32896->10.217.0.176:8775: read: connection reset by peer" Feb 02 11:00:37 crc kubenswrapper[4782]: I0202 11:00:37.810175 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47aff64c-0afc-4b3c-9e90-cbe926943170","Type":"ContainerStarted","Data":"1af5ad0efe11469a83d7e0d5e41c5bd015d9f9c013f5fb5fd6fda8e2cb52b798"} Feb 02 11:00:37 crc kubenswrapper[4782]: I0202 11:00:37.815416 4782 generic.go:334] "Generic (PLEG): container finished" podID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerID="ea761e00f637478bb8b568bb6c81afa34c3f7a4ec22a7b59cb0694a1f3b66f1f" exitCode=0 Feb 02 11:00:37 crc kubenswrapper[4782]: I0202 11:00:37.815454 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0","Type":"ContainerDied","Data":"ea761e00f637478bb8b568bb6c81afa34c3f7a4ec22a7b59cb0694a1f3b66f1f"} Feb 02 11:00:37 crc kubenswrapper[4782]: I0202 11:00:37.829072 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.829055277 podStartE2EDuration="2.829055277s" podCreationTimestamp="2026-02-02 11:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:37.826839863 +0000 UTC m=+1317.711032579" watchObservedRunningTime="2026-02-02 11:00:37.829055277 +0000 UTC m=+1317.713247993" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.035367 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.136846 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-combined-ca-bundle\") pod \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.136998 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-logs\") pod \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.137063 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvq2b\" (UniqueName: \"kubernetes.io/projected/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-kube-api-access-wvq2b\") pod \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.137116 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-nova-metadata-tls-certs\") pod \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.137153 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-config-data\") pod \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.137458 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-logs" (OuterVolumeSpecName: "logs") pod "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" (UID: "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.137814 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.147328 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-kube-api-access-wvq2b" (OuterVolumeSpecName: "kube-api-access-wvq2b") pod "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" (UID: "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0"). InnerVolumeSpecName "kube-api-access-wvq2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.184718 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" (UID: "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.185339 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-config-data" (OuterVolumeSpecName: "config-data") pod "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" (UID: "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.213836 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" (UID: "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.240056 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvq2b\" (UniqueName: \"kubernetes.io/projected/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-kube-api-access-wvq2b\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.240095 4782 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.240109 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.240120 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.825704 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.831767 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0","Type":"ContainerDied","Data":"b659fbd9e60731442fb339dcfdf8314f4d7167c7f486099f43c6dfe96912afd1"} Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.831814 4782 scope.go:117] "RemoveContainer" containerID="ea761e00f637478bb8b568bb6c81afa34c3f7a4ec22a7b59cb0694a1f3b66f1f" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.866324 4782 scope.go:117] "RemoveContainer" containerID="12de97960a6c2463ee72adbbcb4dfcd44c67437415fdffd257c34188bef89140" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.883100 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.904044 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.912915 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:00:38 crc kubenswrapper[4782]: E0202 11:00:38.913400 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-metadata" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.913417 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-metadata" Feb 02 11:00:38 crc kubenswrapper[4782]: E0202 11:00:38.913429 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-log" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.913437 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-log" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.913597 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-metadata" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.913615 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-log" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.914663 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.917078 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.917251 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.936477 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.055908 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.056319 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbrmz\" (UniqueName: \"kubernetes.io/projected/ffbaaa30-f515-494a-94af-a7a83fb44ada-kube-api-access-tbrmz\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.056365 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-config-data\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.056399 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.056548 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffbaaa30-f515-494a-94af-a7a83fb44ada-logs\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.158246 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.158326 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbrmz\" (UniqueName: \"kubernetes.io/projected/ffbaaa30-f515-494a-94af-a7a83fb44ada-kube-api-access-tbrmz\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.158358 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-config-data\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " 
pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.158384 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.158462 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffbaaa30-f515-494a-94af-a7a83fb44ada-logs\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.158954 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffbaaa30-f515-494a-94af-a7a83fb44ada-logs\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.162718 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.175355 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-config-data\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.177165 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.185294 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbrmz\" (UniqueName: \"kubernetes.io/projected/ffbaaa30-f515-494a-94af-a7a83fb44ada-kube-api-access-tbrmz\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.236077 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.675213 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.840897 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffbaaa30-f515-494a-94af-a7a83fb44ada","Type":"ContainerStarted","Data":"d6a1a0455927cdb8fd79ed04008c814b244bef301a5077be9e8485bc608e5379"} Feb 02 11:00:40 crc kubenswrapper[4782]: I0202 11:00:40.841052 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" path="/var/lib/kubelet/pods/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0/volumes" Feb 02 11:00:40 crc kubenswrapper[4782]: I0202 11:00:40.859233 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffbaaa30-f515-494a-94af-a7a83fb44ada","Type":"ContainerStarted","Data":"351eb5141db5c89db50467f9952689f66cb8650b4e10b31c0b90e4c5183d2ee8"} Feb 02 11:00:40 crc kubenswrapper[4782]: I0202 11:00:40.859299 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffbaaa30-f515-494a-94af-a7a83fb44ada","Type":"ContainerStarted","Data":"6c16bff9bf25b5a0cd3fe3f261487e2bbd11f9af4e63d13936b683c0f26ad4a7"} Feb 02 11:00:40 crc kubenswrapper[4782]: I0202 11:00:40.892131 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.892114642 podStartE2EDuration="2.892114642s" podCreationTimestamp="2026-02-02 11:00:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:40.888944341 +0000 UTC m=+1320.773137057" watchObservedRunningTime="2026-02-02 11:00:40.892114642 +0000 UTC m=+1320.776307348" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.181111 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.515679 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.601515 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-config-data\") pod \"1654143f-6a4d-400a-9879-aeddb7807563\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.601765 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-public-tls-certs\") pod \"1654143f-6a4d-400a-9879-aeddb7807563\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.601809 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-combined-ca-bundle\") pod \"1654143f-6a4d-400a-9879-aeddb7807563\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.601842 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-internal-tls-certs\") pod \"1654143f-6a4d-400a-9879-aeddb7807563\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.601907 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1654143f-6a4d-400a-9879-aeddb7807563-logs\") pod \"1654143f-6a4d-400a-9879-aeddb7807563\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.601946 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8scw9\" (UniqueName: \"kubernetes.io/projected/1654143f-6a4d-400a-9879-aeddb7807563-kube-api-access-8scw9\") pod \"1654143f-6a4d-400a-9879-aeddb7807563\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.602860 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1654143f-6a4d-400a-9879-aeddb7807563-logs" (OuterVolumeSpecName: "logs") pod "1654143f-6a4d-400a-9879-aeddb7807563" (UID: "1654143f-6a4d-400a-9879-aeddb7807563"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.612207 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1654143f-6a4d-400a-9879-aeddb7807563-kube-api-access-8scw9" (OuterVolumeSpecName: "kube-api-access-8scw9") pod "1654143f-6a4d-400a-9879-aeddb7807563" (UID: "1654143f-6a4d-400a-9879-aeddb7807563"). InnerVolumeSpecName "kube-api-access-8scw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.631282 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-config-data" (OuterVolumeSpecName: "config-data") pod "1654143f-6a4d-400a-9879-aeddb7807563" (UID: "1654143f-6a4d-400a-9879-aeddb7807563"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.632961 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1654143f-6a4d-400a-9879-aeddb7807563" (UID: "1654143f-6a4d-400a-9879-aeddb7807563"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.647634 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1654143f-6a4d-400a-9879-aeddb7807563" (UID: "1654143f-6a4d-400a-9879-aeddb7807563"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.648999 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1654143f-6a4d-400a-9879-aeddb7807563" (UID: "1654143f-6a4d-400a-9879-aeddb7807563"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.703703 4782 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.703736 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.703745 4782 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.703754 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1654143f-6a4d-400a-9879-aeddb7807563-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.703765 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8scw9\" (UniqueName: \"kubernetes.io/projected/1654143f-6a4d-400a-9879-aeddb7807563-kube-api-access-8scw9\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.703774 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.869338 4782 generic.go:334] "Generic (PLEG): container finished" podID="1654143f-6a4d-400a-9879-aeddb7807563" containerID="d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e" exitCode=0 Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.869799 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.869802 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1654143f-6a4d-400a-9879-aeddb7807563","Type":"ContainerDied","Data":"d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e"} Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.869936 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1654143f-6a4d-400a-9879-aeddb7807563","Type":"ContainerDied","Data":"1bbf1b70356ad1417db8ddfc5e871c0f1d62f0532842271b9f250d7cab9a701c"} Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.869959 4782 scope.go:117] "RemoveContainer" containerID="d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.895765 4782 scope.go:117] "RemoveContainer" containerID="d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.905145 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.916213 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.922749 4782 scope.go:117] "RemoveContainer" containerID="d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e" Feb 02 11:00:41 crc kubenswrapper[4782]: E0202 11:00:41.925135 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e\": container with ID starting with d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e not found: ID does not exist" containerID="d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.925169 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e"} err="failed to get container status \"d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e\": rpc error: code = NotFound desc = could not find container \"d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e\": container with ID starting with d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e not found: ID does not exist" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.925210 4782 scope.go:117] "RemoveContainer" containerID="d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead" Feb 02 11:00:41 crc kubenswrapper[4782]: E0202 11:00:41.925625 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead\": container with ID starting with d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead not found: ID does not exist" containerID="d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.925671 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead"} err="failed to get container status \"d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead\": rpc error: code = NotFound desc = could not 
find container \"d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead\": container with ID starting with d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead not found: ID does not exist" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.932426 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:41 crc kubenswrapper[4782]: E0202 11:00:41.932862 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-log" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.932893 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-log" Feb 02 11:00:41 crc kubenswrapper[4782]: E0202 11:00:41.932916 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-api" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.932925 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-api" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.933735 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-log" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.938455 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-api" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.940377 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.948159 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.948289 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.948597 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.965848 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.007945 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-config-data\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.007999 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3797650-67c5-417c-9b38-52a581a6bbd3-logs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.008024 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg7wl\" (UniqueName: \"kubernetes.io/projected/c3797650-67c5-417c-9b38-52a581a6bbd3-kube-api-access-tg7wl\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.008066 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-public-tls-certs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.008091 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.008117 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.109524 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-config-data\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.109970 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3797650-67c5-417c-9b38-52a581a6bbd3-logs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.109995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg7wl\" (UniqueName: \"kubernetes.io/projected/c3797650-67c5-417c-9b38-52a581a6bbd3-kube-api-access-tg7wl\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.110029 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-public-tls-certs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.110061 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.110086 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.110507 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3797650-67c5-417c-9b38-52a581a6bbd3-logs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.117209 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.118023 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-public-tls-certs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.118677 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-config-data\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.118878 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.132386 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg7wl\" (UniqueName: \"kubernetes.io/projected/c3797650-67c5-417c-9b38-52a581a6bbd3-kube-api-access-tg7wl\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.281359 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.770344 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.832216 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1654143f-6a4d-400a-9879-aeddb7807563" path="/var/lib/kubelet/pods/1654143f-6a4d-400a-9879-aeddb7807563/volumes" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.880348 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3797650-67c5-417c-9b38-52a581a6bbd3","Type":"ContainerStarted","Data":"6450d74b86443cb9709b6f730ffb3efeca50f058ab740f98f73d44bc60828a54"} Feb 02 11:00:43 crc kubenswrapper[4782]: I0202 11:00:43.890054 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3797650-67c5-417c-9b38-52a581a6bbd3","Type":"ContainerStarted","Data":"c90cfd6022005ab700f8909532a9d49ab89f1d3cb4cd119bd69548e74e6a831b"} Feb 02 11:00:43 crc kubenswrapper[4782]: I0202 11:00:43.890439 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3797650-67c5-417c-9b38-52a581a6bbd3","Type":"ContainerStarted","Data":"4f8bf525f8073b8f6e5a8de4fd4bb78495effe3a7f677b1c9da0997e98931f05"} Feb 02 11:00:43 crc kubenswrapper[4782]: I0202 11:00:43.911591 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.911569906 podStartE2EDuration="2.911569906s" podCreationTimestamp="2026-02-02 11:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:43.907792347 +0000 UTC m=+1323.791985073" watchObservedRunningTime="2026-02-02 11:00:43.911569906 +0000 UTC m=+1323.795762622" Feb 02 11:00:44 crc kubenswrapper[4782]: I0202 11:00:44.236741 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 11:00:44 crc kubenswrapper[4782]: I0202 11:00:44.236787 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 11:00:46 crc kubenswrapper[4782]: I0202 11:00:46.180967 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 11:00:46 crc kubenswrapper[4782]: I0202 11:00:46.206760 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 11:00:46 crc kubenswrapper[4782]: I0202 11:00:46.942440 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 11:00:49 crc kubenswrapper[4782]: I0202 11:00:49.236867 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 11:00:49 crc kubenswrapper[4782]: I0202 11:00:49.237213 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 11:00:50 crc kubenswrapper[4782]: I0202 11:00:50.251845 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ffbaaa30-f515-494a-94af-a7a83fb44ada" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:50 crc kubenswrapper[4782]: I0202 11:00:50.251874 4782 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="ffbaaa30-f515-494a-94af-a7a83fb44ada" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:51 crc kubenswrapper[4782]: I0202 11:00:51.974431 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 11:00:52 crc kubenswrapper[4782]: I0202 11:00:52.282767 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:00:52 crc kubenswrapper[4782]: I0202 11:00:52.283299 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:00:53 crc kubenswrapper[4782]: I0202 11:00:53.297865 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3797650-67c5-417c-9b38-52a581a6bbd3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.189:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:53 crc kubenswrapper[4782]: I0202 11:00:53.297928 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3797650-67c5-417c-9b38-52a581a6bbd3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.189:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:59 crc kubenswrapper[4782]: I0202 11:00:59.242050 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 11:00:59 crc kubenswrapper[4782]: I0202 11:00:59.242635 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 11:00:59 crc kubenswrapper[4782]: I0202 11:00:59.249430 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 11:00:59 crc kubenswrapper[4782]: I0202 11:00:59.250402 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.163322 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500501-wcsmz"] Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.165143 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.171306 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500501-wcsmz"] Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.263831 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-fernet-keys\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.263922 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn6m9\" (UniqueName: \"kubernetes.io/projected/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-kube-api-access-fn6m9\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.263982 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-combined-ca-bundle\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.264166 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-config-data\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.365735 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-combined-ca-bundle\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.366031 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-config-data\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.366175 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-fernet-keys\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.366915 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn6m9\" (UniqueName: \"kubernetes.io/projected/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-kube-api-access-fn6m9\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.372183 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-fernet-keys\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.372359 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-combined-ca-bundle\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.381174 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-config-data\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.385515 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn6m9\" (UniqueName: \"kubernetes.io/projected/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-kube-api-access-fn6m9\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.494599 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.950621 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500501-wcsmz"] Feb 02 11:01:01 crc kubenswrapper[4782]: I0202 11:01:01.025132 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500501-wcsmz" event={"ID":"9e752213-09b8-4c8e-a5b6-9cfbf9cea168","Type":"ContainerStarted","Data":"7e03e111d2613782d02b9f6f81cddd918da59c7a09ed0861cd79bccaf1bd7406"} Feb 02 11:01:02 crc kubenswrapper[4782]: I0202 11:01:02.034347 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500501-wcsmz" event={"ID":"9e752213-09b8-4c8e-a5b6-9cfbf9cea168","Type":"ContainerStarted","Data":"87f6259d01839c61f9982bb1cdc9bafb3471a9d3bf6c80349740746cc49506e7"} Feb 02 11:01:02 crc kubenswrapper[4782]: I0202 11:01:02.057527 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500501-wcsmz" podStartSLOduration=2.057501714 podStartE2EDuration="2.057501714s" podCreationTimestamp="2026-02-02 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:01:02.055595989 +0000 UTC m=+1341.939788705" watchObservedRunningTime="2026-02-02 11:01:02.057501714 +0000 UTC m=+1341.941694450" Feb 02 11:01:02 crc kubenswrapper[4782]: I0202 11:01:02.291299 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 11:01:02 crc kubenswrapper[4782]: I0202 11:01:02.292003 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 11:01:02 crc kubenswrapper[4782]: I0202 11:01:02.292273 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 11:01:02 crc kubenswrapper[4782]: I0202 11:01:02.297430 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-api-0" Feb 02 11:01:03 crc kubenswrapper[4782]: I0202 11:01:03.044970 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 11:01:03 crc kubenswrapper[4782]: I0202 11:01:03.051920 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 11:01:04 crc kubenswrapper[4782]: I0202 11:01:04.054451 4782 generic.go:334] "Generic (PLEG): container finished" podID="9e752213-09b8-4c8e-a5b6-9cfbf9cea168" containerID="87f6259d01839c61f9982bb1cdc9bafb3471a9d3bf6c80349740746cc49506e7" exitCode=0 Feb 02 11:01:04 crc kubenswrapper[4782]: I0202 11:01:04.054545 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500501-wcsmz" event={"ID":"9e752213-09b8-4c8e-a5b6-9cfbf9cea168","Type":"ContainerDied","Data":"87f6259d01839c61f9982bb1cdc9bafb3471a9d3bf6c80349740746cc49506e7"} Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.357349 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.455222 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-config-data\") pod \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.455300 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-fernet-keys\") pod \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.455383 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn6m9\" (UniqueName: \"kubernetes.io/projected/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-kube-api-access-fn6m9\") pod \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.455424 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-combined-ca-bundle\") pod \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.461625 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9e752213-09b8-4c8e-a5b6-9cfbf9cea168" (UID: "9e752213-09b8-4c8e-a5b6-9cfbf9cea168"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.461868 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-kube-api-access-fn6m9" (OuterVolumeSpecName: "kube-api-access-fn6m9") pod "9e752213-09b8-4c8e-a5b6-9cfbf9cea168" (UID: "9e752213-09b8-4c8e-a5b6-9cfbf9cea168"). InnerVolumeSpecName "kube-api-access-fn6m9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.488886 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e752213-09b8-4c8e-a5b6-9cfbf9cea168" (UID: "9e752213-09b8-4c8e-a5b6-9cfbf9cea168"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.509058 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-config-data" (OuterVolumeSpecName: "config-data") pod "9e752213-09b8-4c8e-a5b6-9cfbf9cea168" (UID: "9e752213-09b8-4c8e-a5b6-9cfbf9cea168"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.557935 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.557979 4782 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.557992 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn6m9\" (UniqueName: \"kubernetes.io/projected/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-kube-api-access-fn6m9\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.558007 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:06 crc kubenswrapper[4782]: I0202 11:01:06.080083 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500501-wcsmz" event={"ID":"9e752213-09b8-4c8e-a5b6-9cfbf9cea168","Type":"ContainerDied","Data":"7e03e111d2613782d02b9f6f81cddd918da59c7a09ed0861cd79bccaf1bd7406"} Feb 02 11:01:06 crc kubenswrapper[4782]: I0202 11:01:06.080363 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e03e111d2613782d02b9f6f81cddd918da59c7a09ed0861cd79bccaf1bd7406" Feb 02 11:01:06 crc kubenswrapper[4782]: I0202 11:01:06.080136 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:11 crc kubenswrapper[4782]: I0202 11:01:11.143914 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:01:12 crc kubenswrapper[4782]: I0202 11:01:12.266025 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:01:16 crc kubenswrapper[4782]: I0202 11:01:16.634769 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="rabbitmq" containerID="cri-o://8604d1c3c048376b925140bf95b36fa52a1c0e9474ad6d8f17f938507dee28c7" gracePeriod=604795 Feb 02 11:01:17 crc kubenswrapper[4782]: I0202 11:01:17.095734 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="rabbitmq" containerID="cri-o://1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7" gracePeriod=604796 Feb 02 11:01:20 crc kubenswrapper[4782]: I0202 11:01:20.701462 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Feb 02 11:01:21 crc kubenswrapper[4782]: I0202 11:01:21.106782 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.216281 4782 generic.go:334] "Generic (PLEG): container finished" podID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerID="8604d1c3c048376b925140bf95b36fa52a1c0e9474ad6d8f17f938507dee28c7" exitCode=0 Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.216718 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9","Type":"ContainerDied","Data":"8604d1c3c048376b925140bf95b36fa52a1c0e9474ad6d8f17f938507dee28c7"} Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.347924 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497081 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8b77\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-kube-api-access-j8b77\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497149 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-plugins-conf\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497172 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-confd\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497191 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-config-data\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497232 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-tls\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497324 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-server-conf\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497348 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-pod-info\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497385 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-erlang-cookie-secret\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497494 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-erlang-cookie\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497517 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-plugins\") pod 
\"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497539 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.498606 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.507032 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.507243 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.518398 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.519369 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.523730 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.524877 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-kube-api-access-j8b77" (OuterVolumeSpecName: "kube-api-access-j8b77") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "kube-api-access-j8b77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.542210 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-pod-info" (OuterVolumeSpecName: "pod-info") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602312 4782 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602347 4782 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602358 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602366 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602388 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602401 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8b77\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-kube-api-access-j8b77\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602412 4782 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602423 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.606418 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-config-data" (OuterVolumeSpecName: "config-data") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.669893 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-server-conf" (OuterVolumeSpecName: "server-conf") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.704480 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.704514 4782 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.710932 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.748479 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.748492 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.807748 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.807786 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909117 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-confd\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909430 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-plugins\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909508 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-config-data\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909534 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-erlang-cookie\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909553 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-plugins-conf\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909576 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-pod-info\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909609 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-tls\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909627 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-server-conf\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909672 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909730 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-erlang-cookie-secret\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909753 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp4jx\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-kube-api-access-vp4jx\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.910202 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.910664 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.911205 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). 
InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.929688 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.929901 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-pod-info" (OuterVolumeSpecName: "pod-info") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.930456 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.933956 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.944935 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-kube-api-access-vp4jx" (OuterVolumeSpecName: "kube-api-access-vp4jx") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "kube-api-access-vp4jx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.951573 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-config-data" (OuterVolumeSpecName: "config-data") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.006472 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-server-conf" (OuterVolumeSpecName: "server-conf") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.011886 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.011934 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.011954 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.011968 4782 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.011981 4782 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.011990 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.012000 4782 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.012026 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.012040 4782 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.012050 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp4jx\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-kube-api-access-vp4jx\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.041485 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.087837 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.114860 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.114898 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.226665 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.226681 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9","Type":"ContainerDied","Data":"f5e9c30a1317f44fa0c6c47fbf157d56fc7c3351c9ee3750a55dc1c7bb0afa43"} Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.226725 4782 scope.go:117] "RemoveContainer" containerID="8604d1c3c048376b925140bf95b36fa52a1c0e9474ad6d8f17f938507dee28c7" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.229839 4782 generic.go:334] "Generic (PLEG): container finished" podID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerID="1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7" exitCode=0 Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.229930 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02fc338c-2f8c-4e17-8d5f-7a919f4237a2","Type":"ContainerDied","Data":"1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7"} Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.230060 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02fc338c-2f8c-4e17-8d5f-7a919f4237a2","Type":"ContainerDied","Data":"541f3f929dc2bd6facceff225b79637fd9688a5a59dfd43d26c677249a42c37f"} Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.229943 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.281297 4782 scope.go:117] "RemoveContainer" containerID="4b22530b4335201f0edeaaeb102aa0e0c1fe781965be9a91cd8a38308cd04cdb" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.324554 4782 scope.go:117] "RemoveContainer" containerID="1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.327803 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.353842 4782 scope.go:117] "RemoveContainer" containerID="391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.357873 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.367759 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.377359 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.377915 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="setup-container" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.377931 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="setup-container" Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.377945 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e752213-09b8-4c8e-a5b6-9cfbf9cea168" containerName="keystone-cron" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.377953 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e752213-09b8-4c8e-a5b6-9cfbf9cea168" containerName="keystone-cron" Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.377965 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="rabbitmq" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.377973 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="rabbitmq" Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.377989 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="setup-container" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.377997 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="setup-container" Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.378015 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="rabbitmq" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.378022 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="rabbitmq" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.378231 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="rabbitmq" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.378256 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e752213-09b8-4c8e-a5b6-9cfbf9cea168" 
containerName="keystone-cron" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.378270 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="rabbitmq" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.379489 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.380831 4782 scope.go:117] "RemoveContainer" containerID="1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7" Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.383688 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7\": container with ID starting with 1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7 not found: ID does not exist" containerID="1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.383743 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7"} err="failed to get container status \"1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7\": rpc error: code = NotFound desc = could not find container \"1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7\": container with ID starting with 1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7 not found: ID does not exist" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.383776 4782 scope.go:117] "RemoveContainer" containerID="391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.384330 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.384546 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.384759 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.385198 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb\": container with ID starting with 391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb not found: ID does not exist" containerID="391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.385229 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb"} err="failed to get container status \"391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb\": rpc error: code = NotFound desc = could not find container \"391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb\": container with ID starting with 391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb not found: ID does not exist" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.387395 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 
11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.389076 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.389112 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fsk8v" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.389717 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.389915 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.397525 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.406447 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.408027 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.413663 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.413957 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.414161 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.420257 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.420371 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.420398 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.423986 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l8s6k" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.428902 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.525695 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.525801 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjlqq\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-kube-api-access-xjlqq\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.525850 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526234 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-config-data\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526316 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5c627ac-51a8-46a5-9ccd-62072de19909-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526399 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526536 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5c627ac-51a8-46a5-9ccd-62072de19909-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526588 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526678 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526699 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526774 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526836 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526867 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526954 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527008 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527081 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527145 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527183 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527267 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527372 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gcnz\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-kube-api-access-6gcnz\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527404 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527478 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.629268 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gcnz\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-kube-api-access-6gcnz\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.629657 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.629675 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.629707 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630206 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630480 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630562 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjlqq\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-kube-api-access-xjlqq\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630598 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630660 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-config-data\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630688 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5c627ac-51a8-46a5-9ccd-62072de19909-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630721 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630734 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630828 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5c627ac-51a8-46a5-9ccd-62072de19909-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630862 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630907 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630927 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630953 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630999 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.631029 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.631105 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.631411 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-config-data\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.632017 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.632280 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.632326 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.632785 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.633748 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.633851 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.634023 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.634092 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.634127 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.634175 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.634199 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.635134 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.639310 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.643613 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5c627ac-51a8-46a5-9ccd-62072de19909-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.648151 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.648724 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc 
kubenswrapper[4782]: I0202 11:01:24.649218 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.649226 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5c627ac-51a8-46a5-9ccd-62072de19909-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.652267 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gcnz\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-kube-api-access-6gcnz\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.652926 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.654899 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.655240 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjlqq\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-kube-api-access-xjlqq\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.656250 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.682863 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.683524 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.708437 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.741444 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.855243 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" path="/var/lib/kubelet/pods/02fc338c-2f8c-4e17-8d5f-7a919f4237a2/volumes" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.865103 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" path="/var/lib/kubelet/pods/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9/volumes" Feb 02 11:01:25 crc kubenswrapper[4782]: I0202 11:01:25.223556 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:01:25 crc kubenswrapper[4782]: W0202 11:01:25.236275 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d450a8e_fd5c_40fe_a4ff_ab265dab04df.slice/crio-08f5eec9f9bf335c4b09a95de088e615af6e87580ccdc2b4af3050069d2e3f98 WatchSource:0}: Error finding container 08f5eec9f9bf335c4b09a95de088e615af6e87580ccdc2b4af3050069d2e3f98: Status 404 returned error can't find the container with id 08f5eec9f9bf335c4b09a95de088e615af6e87580ccdc2b4af3050069d2e3f98 Feb 02 11:01:25 crc kubenswrapper[4782]: I0202 11:01:25.324613 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:01:25 crc kubenswrapper[4782]: W0202 11:01:25.343003 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5c627ac_51a8_46a5_9ccd_62072de19909.slice/crio-528fcacb1809f7190a566f9367576b4aa05ec67284d891586d4a8decce1d2b76 WatchSource:0}: Error finding container 528fcacb1809f7190a566f9367576b4aa05ec67284d891586d4a8decce1d2b76: Status 404 returned error can't find the container with id 528fcacb1809f7190a566f9367576b4aa05ec67284d891586d4a8decce1d2b76 Feb 02 11:01:25 crc kubenswrapper[4782]: I0202 11:01:25.964319 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-xx6pn"] Feb 02 11:01:25 crc kubenswrapper[4782]: I0202 11:01:25.972876 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:25 crc kubenswrapper[4782]: I0202 11:01:25.980947 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.057631 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfcmz\" (UniqueName: \"kubernetes.io/projected/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-kube-api-access-zfcmz\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.057735 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-config\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.057777 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.057802 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.057818 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.057905 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.115851 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-xx6pn"] Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.159513 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfcmz\" (UniqueName: \"kubernetes.io/projected/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-kube-api-access-zfcmz\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.160120 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-config\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: 
\"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.160251 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.160337 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.160410 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.160558 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.161875 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-config\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.162454 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.162687 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.163001 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.163410 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc 
kubenswrapper[4782]: I0202 11:01:26.258615 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5c627ac-51a8-46a5-9ccd-62072de19909","Type":"ContainerStarted","Data":"528fcacb1809f7190a566f9367576b4aa05ec67284d891586d4a8decce1d2b76"} Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.262612 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d450a8e-fd5c-40fe-a4ff-ab265dab04df","Type":"ContainerStarted","Data":"08f5eec9f9bf335c4b09a95de088e615af6e87580ccdc2b4af3050069d2e3f98"} Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.367900 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfcmz\" (UniqueName: \"kubernetes.io/projected/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-kube-api-access-zfcmz\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.605849 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.122963 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-xx6pn"] Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.286491 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" event={"ID":"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84","Type":"ContainerStarted","Data":"4f96a694a6c0c378d5195b8d4732411eb5a8003c68c5e227f0522f0850f13d8c"} Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.298672 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5c627ac-51a8-46a5-9ccd-62072de19909","Type":"ContainerStarted","Data":"b21fa9690961d9343a49a1e5b91cb13364053d8ff62ae86442d552092e66f129"} Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.301077 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d450a8e-fd5c-40fe-a4ff-ab265dab04df","Type":"ContainerStarted","Data":"f83420861b6699b6337be09f1f27e658dd2355b35fe71a8cacee58949d08a256"} Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.312230 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m"] Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.313344 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.315279 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.316277 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.316896 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.317456 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.441479 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.441563 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rdr8\" (UniqueName: \"kubernetes.io/projected/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-kube-api-access-6rdr8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.442198 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.442615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.468020 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m"] Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.544363 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rdr8\" (UniqueName: \"kubernetes.io/projected/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-kube-api-access-6rdr8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.544428 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.544524 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.544612 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.559815 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.560857 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.561864 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.567528 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rdr8\" (UniqueName: \"kubernetes.io/projected/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-kube-api-access-6rdr8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.628679 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:28 crc kubenswrapper[4782]: I0202 11:01:28.166679 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m"] Feb 02 11:01:28 crc kubenswrapper[4782]: I0202 11:01:28.313845 4782 generic.go:334] "Generic (PLEG): container finished" podID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerID="64a4e00006dcb48d8a2c4ca10d2313323ebb2c4f6e7fed9822224da02b26dc5e" exitCode=0 Feb 02 11:01:28 crc kubenswrapper[4782]: I0202 11:01:28.313926 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" event={"ID":"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84","Type":"ContainerDied","Data":"64a4e00006dcb48d8a2c4ca10d2313323ebb2c4f6e7fed9822224da02b26dc5e"} Feb 02 11:01:28 crc kubenswrapper[4782]: I0202 11:01:28.318177 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" event={"ID":"99553aeb-f0fe-47e8-9d2a-64f4b49be76c","Type":"ContainerStarted","Data":"666456605afd38f240297301ca5448a11381e94c4ea47bb87e5894af3411f5d0"} Feb 02 11:01:29 crc kubenswrapper[4782]: I0202 11:01:29.331300 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" event={"ID":"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84","Type":"ContainerStarted","Data":"72f241797192e02c052224493112540287781032ec83d38da5767774f61f56da"} Feb 02 11:01:29 crc kubenswrapper[4782]: I0202 11:01:29.332818 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:29 crc kubenswrapper[4782]: I0202 11:01:29.357206 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" podStartSLOduration=4.357180861 podStartE2EDuration="4.357180861s" podCreationTimestamp="2026-02-02 11:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:01:29.349813939 +0000 UTC m=+1369.234006655" watchObservedRunningTime="2026-02-02 11:01:29.357180861 +0000 UTC m=+1369.241373577" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.606839 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.677746 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-pkbw6"] Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.680387 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="dnsmasq-dns" containerID="cri-o://9904d146dfcfd7dad397e5d6886fa15b96c9becbf2f18f528f0d3f3a41ce062d" gracePeriod=10 Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.845976 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79794c8ddf-x6sht"] Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.860363 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79794c8ddf-x6sht"] Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.860470 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.924836 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-openstack-edpm-ipam\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.924898 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-sb\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.924992 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-config\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.925010 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fks5j\" (UniqueName: \"kubernetes.io/projected/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-kube-api-access-fks5j\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.925226 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-nb\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.925494 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-dns-svc\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.027845 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-openstack-edpm-ipam\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.027899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-sb\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.027955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-config\") pod 
\"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.027970 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fks5j\" (UniqueName: \"kubernetes.io/projected/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-kube-api-access-fks5j\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.028004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-nb\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.028056 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-dns-svc\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.028918 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-sb\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.028939 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-openstack-edpm-ipam\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.028978 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-config\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.029059 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-dns-svc\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.029180 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-nb\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.055164 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fks5j\" (UniqueName: \"kubernetes.io/projected/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-kube-api-access-fks5j\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " 
pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.203108 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.209084 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.183:5353: connect: connection refused" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.421422 4782 generic.go:334] "Generic (PLEG): container finished" podID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerID="9904d146dfcfd7dad397e5d6886fa15b96c9becbf2f18f528f0d3f3a41ce062d" exitCode=0 Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.421474 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" event={"ID":"3e601661-fbc5-4fee-b3fb-456f6edc48f4","Type":"ContainerDied","Data":"9904d146dfcfd7dad397e5d6886fa15b96c9becbf2f18f528f0d3f3a41ce062d"} Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.593730 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.680184 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-config\") pod \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.680308 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-sb\") pod \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.680348 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-dns-svc\") pod \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.680371 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdkwq\" (UniqueName: \"kubernetes.io/projected/3e601661-fbc5-4fee-b3fb-456f6edc48f4-kube-api-access-kdkwq\") pod \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.680476 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-nb\") pod \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.685404 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e601661-fbc5-4fee-b3fb-456f6edc48f4-kube-api-access-kdkwq" (OuterVolumeSpecName: "kube-api-access-kdkwq") pod "3e601661-fbc5-4fee-b3fb-456f6edc48f4" (UID: "3e601661-fbc5-4fee-b3fb-456f6edc48f4"). InnerVolumeSpecName "kube-api-access-kdkwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.728321 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e601661-fbc5-4fee-b3fb-456f6edc48f4" (UID: "3e601661-fbc5-4fee-b3fb-456f6edc48f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.731144 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-config" (OuterVolumeSpecName: "config") pod "3e601661-fbc5-4fee-b3fb-456f6edc48f4" (UID: "3e601661-fbc5-4fee-b3fb-456f6edc48f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.750539 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e601661-fbc5-4fee-b3fb-456f6edc48f4" (UID: "3e601661-fbc5-4fee-b3fb-456f6edc48f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.752433 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e601661-fbc5-4fee-b3fb-456f6edc48f4" (UID: "3e601661-fbc5-4fee-b3fb-456f6edc48f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.782775 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.782836 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdkwq\" (UniqueName: \"kubernetes.io/projected/3e601661-fbc5-4fee-b3fb-456f6edc48f4-kube-api-access-kdkwq\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.782849 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.782858 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.782867 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.815680 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79794c8ddf-x6sht"] Feb 02 11:01:39 crc kubenswrapper[4782]: W0202 11:01:39.816228 4782 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b42d8a9_18c7_4a14_86b0_ab5fd02a39d1.slice/crio-65ae78131f6705fae79d726446209449b899ee5d5e41b756ff8cdcf0ea494dca WatchSource:0}: Error finding container 65ae78131f6705fae79d726446209449b899ee5d5e41b756ff8cdcf0ea494dca: Status 404 returned error can't find the container with id 65ae78131f6705fae79d726446209449b899ee5d5e41b756ff8cdcf0ea494dca Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.453065 4782 generic.go:334] "Generic (PLEG): container finished" podID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerID="6f20530bb72c77a28a0b759dfecb8abeba2d4c4c9ec2b1e203807cb88c440c27" exitCode=0 Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.453158 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" event={"ID":"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1","Type":"ContainerDied","Data":"6f20530bb72c77a28a0b759dfecb8abeba2d4c4c9ec2b1e203807cb88c440c27"} Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.453413 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" event={"ID":"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1","Type":"ContainerStarted","Data":"65ae78131f6705fae79d726446209449b899ee5d5e41b756ff8cdcf0ea494dca"} Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.456623 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" event={"ID":"3e601661-fbc5-4fee-b3fb-456f6edc48f4","Type":"ContainerDied","Data":"cd107aafb4197d3a9b8dcab601301a7574e5c0bd0413b81852d1995de35f6645"} Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.456717 4782 scope.go:117] "RemoveContainer" containerID="9904d146dfcfd7dad397e5d6886fa15b96c9becbf2f18f528f0d3f3a41ce062d" Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.456834 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.462511 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" event={"ID":"99553aeb-f0fe-47e8-9d2a-64f4b49be76c","Type":"ContainerStarted","Data":"1918681cf715e6155198ca866454b6e4a0c53baf344cbc2db5089e48edd2cc36"} Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.509589 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" podStartSLOduration=2.355003128 podStartE2EDuration="13.509570041s" podCreationTimestamp="2026-02-02 11:01:27 +0000 UTC" firstStartedPulling="2026-02-02 11:01:28.172474976 +0000 UTC m=+1368.056667692" lastFinishedPulling="2026-02-02 11:01:39.327041889 +0000 UTC m=+1379.211234605" observedRunningTime="2026-02-02 11:01:40.502751695 +0000 UTC m=+1380.386944411" watchObservedRunningTime="2026-02-02 11:01:40.509570041 +0000 UTC m=+1380.393762757" Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.578491 4782 scope.go:117] "RemoveContainer" containerID="3632ebdb9d373630e077154436a2fd0455ce319004676a7d22bf4fd22d09ccf1" Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.683413 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-pkbw6"] Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.697456 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-pkbw6"] Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.833789 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" path="/var/lib/kubelet/pods/3e601661-fbc5-4fee-b3fb-456f6edc48f4/volumes" Feb 02 11:01:41 crc kubenswrapper[4782]: I0202 11:01:41.472005 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" event={"ID":"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1","Type":"ContainerStarted","Data":"699e07b8f64810448ea2047e5e2c614ebf51ac1e603c9d37e164e29edf07c224"} Feb 02 11:01:41 crc kubenswrapper[4782]: I0202 11:01:41.472383 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:41 crc kubenswrapper[4782]: I0202 11:01:41.495435 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" podStartSLOduration=5.495412798 podStartE2EDuration="5.495412798s" podCreationTimestamp="2026-02-02 11:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:01:41.490109936 +0000 UTC m=+1381.374302652" watchObservedRunningTime="2026-02-02 11:01:41.495412798 +0000 UTC m=+1381.379605524" Feb 02 11:01:47 crc kubenswrapper[4782]: I0202 11:01:47.204723 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:47 crc kubenswrapper[4782]: I0202 11:01:47.267942 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-xx6pn"] Feb 02 11:01:47 crc kubenswrapper[4782]: I0202 11:01:47.268202 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerName="dnsmasq-dns" 
containerID="cri-o://72f241797192e02c052224493112540287781032ec83d38da5767774f61f56da" gracePeriod=10 Feb 02 11:01:47 crc kubenswrapper[4782]: I0202 11:01:47.561438 4782 generic.go:334] "Generic (PLEG): container finished" podID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerID="72f241797192e02c052224493112540287781032ec83d38da5767774f61f56da" exitCode=0 Feb 02 11:01:47 crc kubenswrapper[4782]: I0202 11:01:47.562556 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" event={"ID":"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84","Type":"ContainerDied","Data":"72f241797192e02c052224493112540287781032ec83d38da5767774f61f56da"} Feb 02 11:01:47 crc kubenswrapper[4782]: I0202 11:01:47.936159 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.069237 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-openstack-edpm-ipam\") pod \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.069313 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-nb\") pod \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.069397 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-dns-svc\") pod \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.069419 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfcmz\" (UniqueName: \"kubernetes.io/projected/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-kube-api-access-zfcmz\") pod \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.069498 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-sb\") pod \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.069536 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-config\") pod \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.074695 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-kube-api-access-zfcmz" (OuterVolumeSpecName: "kube-api-access-zfcmz") pod "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" (UID: "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84"). InnerVolumeSpecName "kube-api-access-zfcmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.133777 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-config" (OuterVolumeSpecName: "config") pod "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" (UID: "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.135390 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" (UID: "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.141298 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" (UID: "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.146177 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" (UID: "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.155336 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" (UID: "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.171583 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.171623 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.171649 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.171658 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.171668 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.171676 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfcmz\" (UniqueName: \"kubernetes.io/projected/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-kube-api-access-zfcmz\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.572932 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" event={"ID":"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84","Type":"ContainerDied","Data":"4f96a694a6c0c378d5195b8d4732411eb5a8003c68c5e227f0522f0850f13d8c"} Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.572988 4782 scope.go:117] "RemoveContainer" containerID="72f241797192e02c052224493112540287781032ec83d38da5767774f61f56da" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.573038 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.594548 4782 scope.go:117] "RemoveContainer" containerID="64a4e00006dcb48d8a2c4ca10d2313323ebb2c4f6e7fed9822224da02b26dc5e" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.604872 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-xx6pn"] Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.626061 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-xx6pn"] Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.832343 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" path="/var/lib/kubelet/pods/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84/volumes" Feb 02 11:01:50 crc kubenswrapper[4782]: I0202 11:01:50.595305 4782 generic.go:334] "Generic (PLEG): container finished" podID="99553aeb-f0fe-47e8-9d2a-64f4b49be76c" containerID="1918681cf715e6155198ca866454b6e4a0c53baf344cbc2db5089e48edd2cc36" exitCode=0 Feb 02 11:01:50 crc kubenswrapper[4782]: I0202 11:01:50.595405 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" event={"ID":"99553aeb-f0fe-47e8-9d2a-64f4b49be76c","Type":"ContainerDied","Data":"1918681cf715e6155198ca866454b6e4a0c53baf344cbc2db5089e48edd2cc36"} Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.262542 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.355439 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-inventory\") pod \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.355718 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-ssh-key-openstack-edpm-ipam\") pod \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.355822 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rdr8\" (UniqueName: \"kubernetes.io/projected/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-kube-api-access-6rdr8\") pod \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.355917 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-repo-setup-combined-ca-bundle\") pod \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.378899 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "99553aeb-f0fe-47e8-9d2a-64f4b49be76c" (UID: "99553aeb-f0fe-47e8-9d2a-64f4b49be76c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.379030 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-kube-api-access-6rdr8" (OuterVolumeSpecName: "kube-api-access-6rdr8") pod "99553aeb-f0fe-47e8-9d2a-64f4b49be76c" (UID: "99553aeb-f0fe-47e8-9d2a-64f4b49be76c"). InnerVolumeSpecName "kube-api-access-6rdr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.416816 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-inventory" (OuterVolumeSpecName: "inventory") pod "99553aeb-f0fe-47e8-9d2a-64f4b49be76c" (UID: "99553aeb-f0fe-47e8-9d2a-64f4b49be76c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.419130 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "99553aeb-f0fe-47e8-9d2a-64f4b49be76c" (UID: "99553aeb-f0fe-47e8-9d2a-64f4b49be76c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.457656 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.457690 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rdr8\" (UniqueName: \"kubernetes.io/projected/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-kube-api-access-6rdr8\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.457699 4782 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.457709 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.616790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" event={"ID":"99553aeb-f0fe-47e8-9d2a-64f4b49be76c","Type":"ContainerDied","Data":"666456605afd38f240297301ca5448a11381e94c4ea47bb87e5894af3411f5d0"} Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.617030 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="666456605afd38f240297301ca5448a11381e94c4ea47bb87e5894af3411f5d0" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.616856 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.737740 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc"] Feb 02 11:01:52 crc kubenswrapper[4782]: E0202 11:01:52.738446 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerName="dnsmasq-dns" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.738539 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerName="dnsmasq-dns" Feb 02 11:01:52 crc kubenswrapper[4782]: E0202 11:01:52.738611 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerName="init" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.738708 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerName="init" Feb 02 11:01:52 crc kubenswrapper[4782]: E0202 11:01:52.738793 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="init" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.738862 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="init" Feb 02 11:01:52 crc kubenswrapper[4782]: E0202 11:01:52.738941 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="dnsmasq-dns" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.739004 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="dnsmasq-dns" Feb 02 11:01:52 crc kubenswrapper[4782]: E0202 11:01:52.739065 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99553aeb-f0fe-47e8-9d2a-64f4b49be76c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.739351 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="99553aeb-f0fe-47e8-9d2a-64f4b49be76c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.739602 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="dnsmasq-dns" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.739720 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="99553aeb-f0fe-47e8-9d2a-64f4b49be76c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.739789 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerName="dnsmasq-dns" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.740479 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.743513 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.743603 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.743975 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.744246 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.756568 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc"] Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.869431 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.870741 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.871032 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.871193 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5ns4\" (UniqueName: \"kubernetes.io/projected/37960174-d26b-460f-abd9-934dee1ecc8c-kube-api-access-n5ns4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.972804 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5ns4\" (UniqueName: \"kubernetes.io/projected/37960174-d26b-460f-abd9-934dee1ecc8c-kube-api-access-n5ns4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.972899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.972986 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.973052 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.977541 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.977785 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.982058 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.992095 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5ns4\" (UniqueName: \"kubernetes.io/projected/37960174-d26b-460f-abd9-934dee1ecc8c-kube-api-access-n5ns4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:53 crc kubenswrapper[4782]: I0202 11:01:53.068856 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:53 crc kubenswrapper[4782]: I0202 11:01:53.594210 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc"] Feb 02 11:01:53 crc kubenswrapper[4782]: I0202 11:01:53.628039 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" event={"ID":"37960174-d26b-460f-abd9-934dee1ecc8c","Type":"ContainerStarted","Data":"9892e007cdd650bc40bcfc18f9718fe4bea24b499dbb56ceb32f38212b4a0b4c"} Feb 02 11:01:54 crc kubenswrapper[4782]: I0202 11:01:54.635519 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" event={"ID":"37960174-d26b-460f-abd9-934dee1ecc8c","Type":"ContainerStarted","Data":"47940927ded7a9aac258b8c6a3364ef69283f34e697e95ad52e93cc9f65a9e0c"} Feb 02 11:01:54 crc kubenswrapper[4782]: I0202 11:01:54.656635 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" podStartSLOduration=2.221861235 podStartE2EDuration="2.656612645s" podCreationTimestamp="2026-02-02 11:01:52 +0000 UTC" firstStartedPulling="2026-02-02 11:01:53.60081 +0000 UTC m=+1393.485002706" lastFinishedPulling="2026-02-02 11:01:54.0355614 +0000 UTC m=+1393.919754116" observedRunningTime="2026-02-02 11:01:54.647935686 +0000 UTC m=+1394.532128402" watchObservedRunningTime="2026-02-02 11:01:54.656612645 +0000 UTC m=+1394.540805361" Feb 02 11:01:58 crc kubenswrapper[4782]: I0202 11:01:58.698800 4782 generic.go:334] "Generic (PLEG): container finished" podID="8d450a8e-fd5c-40fe-a4ff-ab265dab04df" containerID="f83420861b6699b6337be09f1f27e658dd2355b35fe71a8cacee58949d08a256" exitCode=0 Feb 02 11:01:58 crc kubenswrapper[4782]: I0202 11:01:58.699399 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d450a8e-fd5c-40fe-a4ff-ab265dab04df","Type":"ContainerDied","Data":"f83420861b6699b6337be09f1f27e658dd2355b35fe71a8cacee58949d08a256"} Feb 02 11:01:59 crc kubenswrapper[4782]: I0202 11:01:59.710098 4782 generic.go:334] "Generic (PLEG): container finished" podID="b5c627ac-51a8-46a5-9ccd-62072de19909" containerID="b21fa9690961d9343a49a1e5b91cb13364053d8ff62ae86442d552092e66f129" exitCode=0 Feb 02 11:01:59 crc kubenswrapper[4782]: I0202 11:01:59.710183 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5c627ac-51a8-46a5-9ccd-62072de19909","Type":"ContainerDied","Data":"b21fa9690961d9343a49a1e5b91cb13364053d8ff62ae86442d552092e66f129"} Feb 02 11:01:59 crc kubenswrapper[4782]: I0202 11:01:59.712138 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d450a8e-fd5c-40fe-a4ff-ab265dab04df","Type":"ContainerStarted","Data":"b4fad3233a676b9abbca18b3f365bd7065b34384e9bbcb3dccd20882b22a5d71"} Feb 02 11:01:59 crc kubenswrapper[4782]: I0202 11:01:59.712566 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:59 crc kubenswrapper[4782]: I0202 11:01:59.791243 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.791218194 podStartE2EDuration="35.791218194s" podCreationTimestamp="2026-02-02 11:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:01:59.783096141 +0000 UTC m=+1399.667288857" watchObservedRunningTime="2026-02-02 11:01:59.791218194 +0000 UTC m=+1399.675410910" Feb 02 11:02:00 crc kubenswrapper[4782]: I0202 11:02:00.723671 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5c627ac-51a8-46a5-9ccd-62072de19909","Type":"ContainerStarted","Data":"a8e5b10f8708ed589b76977f28fa86e036b3cb9461dd8c306639ff0cc78ff17b"} Feb 02 11:02:00 crc kubenswrapper[4782]: I0202 11:02:00.723858 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 11:02:00 crc kubenswrapper[4782]: I0202 11:02:00.746086 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.746067422 podStartE2EDuration="36.746067422s" podCreationTimestamp="2026-02-02 11:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:02:00.743040305 +0000 UTC m=+1400.627233021" watchObservedRunningTime="2026-02-02 11:02:00.746067422 +0000 UTC m=+1400.630260138" Feb 02 11:02:14 crc kubenswrapper[4782]: I0202 11:02:14.713837 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:02:14 crc kubenswrapper[4782]: I0202 11:02:14.744846 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 11:02:22 crc kubenswrapper[4782]: I0202 11:02:22.951301 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:02:22 crc kubenswrapper[4782]: I0202 11:02:22.952902 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:02:47 crc kubenswrapper[4782]: I0202 11:02:47.751498 4782 scope.go:117] "RemoveContainer" containerID="ce6913cbdfadb84393c08d16197643efcccd14ec7c86e1016dba2acac54b37e6" Feb 02 11:02:52 crc kubenswrapper[4782]: I0202 11:02:52.950919 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:02:52 crc kubenswrapper[4782]: I0202 11:02:52.951439 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.235928 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m8c9k"] Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.239001 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.283242 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m8c9k"] Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.440371 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-utilities\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.440494 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6zxx\" (UniqueName: \"kubernetes.io/projected/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-kube-api-access-b6zxx\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.440580 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-catalog-content\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.542477 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-catalog-content\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.542630 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-utilities\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.542679 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6zxx\" (UniqueName: \"kubernetes.io/projected/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-kube-api-access-b6zxx\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.542975 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-catalog-content\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.543021 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-utilities\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.562521 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b6zxx\" (UniqueName: \"kubernetes.io/projected/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-kube-api-access-b6zxx\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.564658 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:18 crc kubenswrapper[4782]: I0202 11:03:18.030906 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m8c9k"] Feb 02 11:03:18 crc kubenswrapper[4782]: I0202 11:03:18.386325 4782 generic.go:334] "Generic (PLEG): container finished" podID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerID="ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16" exitCode=0 Feb 02 11:03:18 crc kubenswrapper[4782]: I0202 11:03:18.386407 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerDied","Data":"ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16"} Feb 02 11:03:18 crc kubenswrapper[4782]: I0202 11:03:18.386465 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerStarted","Data":"71b29af3479b58a4f03f268e4531acdc74483b21f89fd5cf404c553a55ac74b5"} Feb 02 11:03:19 crc kubenswrapper[4782]: I0202 11:03:19.400011 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerStarted","Data":"d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742"} Feb 02 11:03:22 crc kubenswrapper[4782]: I0202 11:03:22.951438 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:03:22 crc kubenswrapper[4782]: I0202 11:03:22.952010 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:03:22 crc kubenswrapper[4782]: I0202 11:03:22.952054 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:03:22 crc kubenswrapper[4782]: I0202 11:03:22.952860 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e7f3d9f7d6457b5c614828f06a2a5456dc06adf6cf2e31e022d381663249dca"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:03:22 crc kubenswrapper[4782]: I0202 11:03:22.952918 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" 
containerID="cri-o://9e7f3d9f7d6457b5c614828f06a2a5456dc06adf6cf2e31e022d381663249dca" gracePeriod=600 Feb 02 11:03:23 crc kubenswrapper[4782]: I0202 11:03:23.433856 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="9e7f3d9f7d6457b5c614828f06a2a5456dc06adf6cf2e31e022d381663249dca" exitCode=0 Feb 02 11:03:23 crc kubenswrapper[4782]: I0202 11:03:23.433909 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"9e7f3d9f7d6457b5c614828f06a2a5456dc06adf6cf2e31e022d381663249dca"} Feb 02 11:03:23 crc kubenswrapper[4782]: I0202 11:03:23.433948 4782 scope.go:117] "RemoveContainer" containerID="cc93bfcd857ff139ba103c2136bd4c7838f73ea68a2b8fc097a6c493cab92dd0" Feb 02 11:03:24 crc kubenswrapper[4782]: I0202 11:03:24.446384 4782 generic.go:334] "Generic (PLEG): container finished" podID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerID="d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742" exitCode=0 Feb 02 11:03:24 crc kubenswrapper[4782]: I0202 11:03:24.446464 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerDied","Data":"d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742"} Feb 02 11:03:24 crc kubenswrapper[4782]: I0202 11:03:24.452045 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8"} Feb 02 11:03:25 crc kubenswrapper[4782]: I0202 11:03:25.467294 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerStarted","Data":"caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec"} Feb 02 11:03:25 crc kubenswrapper[4782]: I0202 11:03:25.492621 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m8c9k" podStartSLOduration=1.922370742 podStartE2EDuration="8.49259919s" podCreationTimestamp="2026-02-02 11:03:17 +0000 UTC" firstStartedPulling="2026-02-02 11:03:18.388454727 +0000 UTC m=+1478.272647443" lastFinishedPulling="2026-02-02 11:03:24.958683175 +0000 UTC m=+1484.842875891" observedRunningTime="2026-02-02 11:03:25.484620571 +0000 UTC m=+1485.368813307" watchObservedRunningTime="2026-02-02 11:03:25.49259919 +0000 UTC m=+1485.376791906" Feb 02 11:03:27 crc kubenswrapper[4782]: I0202 11:03:27.594758 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:27 crc kubenswrapper[4782]: I0202 11:03:27.595139 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:28 crc kubenswrapper[4782]: I0202 11:03:28.651684 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m8c9k" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="registry-server" probeResult="failure" output=< Feb 02 11:03:28 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:03:28 crc kubenswrapper[4782]: > Feb 02 11:03:38 crc 
kubenswrapper[4782]: I0202 11:03:38.611051 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m8c9k" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="registry-server" probeResult="failure" output=< Feb 02 11:03:38 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:03:38 crc kubenswrapper[4782]: > Feb 02 11:03:47 crc kubenswrapper[4782]: I0202 11:03:47.618691 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:47 crc kubenswrapper[4782]: I0202 11:03:47.666104 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:47 crc kubenswrapper[4782]: I0202 11:03:47.843006 4782 scope.go:117] "RemoveContainer" containerID="602ab9da4d7f46c94dc61771e2c8b8b42a379bdff5c5bea8faa66082cc751118" Feb 02 11:03:47 crc kubenswrapper[4782]: I0202 11:03:47.872969 4782 scope.go:117] "RemoveContainer" containerID="d77ce47c81331449fe1e66732ce31fcd1c20618737ae71f8b83041e70b41f489" Feb 02 11:03:47 crc kubenswrapper[4782]: I0202 11:03:47.924018 4782 scope.go:117] "RemoveContainer" containerID="be4a6da8c7bc821537f4458c73cbb541469bc749b77b4d5f396a7e71bf22fd01" Feb 02 11:03:48 crc kubenswrapper[4782]: I0202 11:03:48.441100 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m8c9k"] Feb 02 11:03:48 crc kubenswrapper[4782]: I0202 11:03:48.805732 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m8c9k" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="registry-server" containerID="cri-o://caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec" gracePeriod=2 Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.271331 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.400714 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6zxx\" (UniqueName: \"kubernetes.io/projected/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-kube-api-access-b6zxx\") pod \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.401084 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-utilities\") pod \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.401136 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-catalog-content\") pod \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.402085 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-utilities" (OuterVolumeSpecName: "utilities") pod "bce8e421-84b9-4ee0-ad9e-2b3c2a796078" (UID: "bce8e421-84b9-4ee0-ad9e-2b3c2a796078"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.406489 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-kube-api-access-b6zxx" (OuterVolumeSpecName: "kube-api-access-b6zxx") pod "bce8e421-84b9-4ee0-ad9e-2b3c2a796078" (UID: "bce8e421-84b9-4ee0-ad9e-2b3c2a796078"). InnerVolumeSpecName "kube-api-access-b6zxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.504296 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6zxx\" (UniqueName: \"kubernetes.io/projected/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-kube-api-access-b6zxx\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.504413 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.519883 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bce8e421-84b9-4ee0-ad9e-2b3c2a796078" (UID: "bce8e421-84b9-4ee0-ad9e-2b3c2a796078"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.605693 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.817712 4782 generic.go:334] "Generic (PLEG): container finished" podID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerID="caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec" exitCode=0 Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.817765 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerDied","Data":"caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec"} Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.817798 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerDied","Data":"71b29af3479b58a4f03f268e4531acdc74483b21f89fd5cf404c553a55ac74b5"} Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.817795 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.817878 4782 scope.go:117] "RemoveContainer" containerID="caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.839676 4782 scope.go:117] "RemoveContainer" containerID="d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.855388 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m8c9k"] Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.862865 4782 scope.go:117] "RemoveContainer" containerID="ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.866255 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m8c9k"] Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.910325 4782 scope.go:117] "RemoveContainer" containerID="caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec" Feb 02 11:03:49 crc kubenswrapper[4782]: E0202 11:03:49.910842 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec\": container with ID starting with caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec not found: ID does not exist" containerID="caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.910873 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec"} err="failed to get container status \"caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec\": rpc error: code = NotFound desc = could not find container \"caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec\": container with ID starting with caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec not found: ID does not exist" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.910892 4782 scope.go:117] "RemoveContainer" containerID="d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742" Feb 02 11:03:49 crc kubenswrapper[4782]: E0202 11:03:49.911503 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742\": container with ID starting with d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742 not found: ID does not exist" containerID="d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.911551 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742"} err="failed to get container status \"d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742\": rpc error: code = NotFound desc = could not find container \"d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742\": container with ID starting with d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742 not found: ID does not exist" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.911586 4782 scope.go:117] "RemoveContainer" 
containerID="ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16" Feb 02 11:03:49 crc kubenswrapper[4782]: E0202 11:03:49.911929 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16\": container with ID starting with ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16 not found: ID does not exist" containerID="ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.911956 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16"} err="failed to get container status \"ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16\": rpc error: code = NotFound desc = could not find container \"ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16\": container with ID starting with ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16 not found: ID does not exist" Feb 02 11:03:50 crc kubenswrapper[4782]: I0202 11:03:50.832140 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" path="/var/lib/kubelet/pods/bce8e421-84b9-4ee0-ad9e-2b3c2a796078/volumes" Feb 02 11:05:13 crc kubenswrapper[4782]: I0202 11:05:13.578610 4782 generic.go:334] "Generic (PLEG): container finished" podID="37960174-d26b-460f-abd9-934dee1ecc8c" containerID="47940927ded7a9aac258b8c6a3364ef69283f34e697e95ad52e93cc9f65a9e0c" exitCode=0 Feb 02 11:05:13 crc kubenswrapper[4782]: I0202 11:05:13.578698 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" event={"ID":"37960174-d26b-460f-abd9-934dee1ecc8c","Type":"ContainerDied","Data":"47940927ded7a9aac258b8c6a3364ef69283f34e697e95ad52e93cc9f65a9e0c"} Feb 02 11:05:14 crc kubenswrapper[4782]: I0202 11:05:14.997595 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.082259 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-ssh-key-openstack-edpm-ipam\") pod \"37960174-d26b-460f-abd9-934dee1ecc8c\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.082325 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-inventory\") pod \"37960174-d26b-460f-abd9-934dee1ecc8c\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.082404 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5ns4\" (UniqueName: \"kubernetes.io/projected/37960174-d26b-460f-abd9-934dee1ecc8c-kube-api-access-n5ns4\") pod \"37960174-d26b-460f-abd9-934dee1ecc8c\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.082550 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-bootstrap-combined-ca-bundle\") pod \"37960174-d26b-460f-abd9-934dee1ecc8c\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.087486 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37960174-d26b-460f-abd9-934dee1ecc8c-kube-api-access-n5ns4" (OuterVolumeSpecName: "kube-api-access-n5ns4") pod "37960174-d26b-460f-abd9-934dee1ecc8c" (UID: "37960174-d26b-460f-abd9-934dee1ecc8c"). InnerVolumeSpecName "kube-api-access-n5ns4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.088830 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "37960174-d26b-460f-abd9-934dee1ecc8c" (UID: "37960174-d26b-460f-abd9-934dee1ecc8c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.110800 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "37960174-d26b-460f-abd9-934dee1ecc8c" (UID: "37960174-d26b-460f-abd9-934dee1ecc8c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.111200 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-inventory" (OuterVolumeSpecName: "inventory") pod "37960174-d26b-460f-abd9-934dee1ecc8c" (UID: "37960174-d26b-460f-abd9-934dee1ecc8c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.184777 4782 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.184817 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.184829 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.184838 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5ns4\" (UniqueName: \"kubernetes.io/projected/37960174-d26b-460f-abd9-934dee1ecc8c-kube-api-access-n5ns4\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.595608 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" event={"ID":"37960174-d26b-460f-abd9-934dee1ecc8c","Type":"ContainerDied","Data":"9892e007cdd650bc40bcfc18f9718fe4bea24b499dbb56ceb32f38212b4a0b4c"} Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.595859 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9892e007cdd650bc40bcfc18f9718fe4bea24b499dbb56ceb32f38212b4a0b4c" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.595840 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.682876 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78"] Feb 02 11:05:15 crc kubenswrapper[4782]: E0202 11:05:15.683247 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="extract-content" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.683259 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="extract-content" Feb 02 11:05:15 crc kubenswrapper[4782]: E0202 11:05:15.683273 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="extract-utilities" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.683279 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="extract-utilities" Feb 02 11:05:15 crc kubenswrapper[4782]: E0202 11:05:15.683296 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37960174-d26b-460f-abd9-934dee1ecc8c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.683305 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="37960174-d26b-460f-abd9-934dee1ecc8c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:05:15 crc kubenswrapper[4782]: E0202 11:05:15.683323 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="registry-server" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.683329 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="registry-server" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.683482 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="37960174-d26b-460f-abd9-934dee1ecc8c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.683494 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="registry-server" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.684088 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.686307 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.686522 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.686748 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.697330 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.704807 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78"] Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.793224 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdg7q\" (UniqueName: \"kubernetes.io/projected/5a24fab5-51cc-4f0a-a823-c9748efd8410-kube-api-access-qdg7q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.793348 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.793392 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.895328 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdg7q\" (UniqueName: \"kubernetes.io/projected/5a24fab5-51cc-4f0a-a823-c9748efd8410-kube-api-access-qdg7q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.895453 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.895502 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.899145 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.900202 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.917557 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdg7q\" (UniqueName: \"kubernetes.io/projected/5a24fab5-51cc-4f0a-a823-c9748efd8410-kube-api-access-qdg7q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:16 crc kubenswrapper[4782]: I0202 11:05:16.000945 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:16 crc kubenswrapper[4782]: I0202 11:05:16.539084 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78"] Feb 02 11:05:16 crc kubenswrapper[4782]: I0202 11:05:16.605144 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" event={"ID":"5a24fab5-51cc-4f0a-a823-c9748efd8410","Type":"ContainerStarted","Data":"be7ce3ccb69a5745054007321e53120f9506050fe2a04ddb2bd3dfef26a90754"} Feb 02 11:05:17 crc kubenswrapper[4782]: I0202 11:05:17.618834 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" event={"ID":"5a24fab5-51cc-4f0a-a823-c9748efd8410","Type":"ContainerStarted","Data":"65e9d4460cda578d85c98c8eacb6e70446a4235a9df02ce23f87a954cc50ea96"} Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.080720 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" podStartSLOduration=25.671238254 podStartE2EDuration="26.080700977s" podCreationTimestamp="2026-02-02 11:05:15 +0000 UTC" firstStartedPulling="2026-02-02 11:05:16.547993778 +0000 UTC m=+1596.432186504" lastFinishedPulling="2026-02-02 11:05:16.957456521 +0000 UTC m=+1596.841649227" observedRunningTime="2026-02-02 11:05:17.64131944 +0000 UTC m=+1597.525512156" watchObservedRunningTime="2026-02-02 11:05:41.080700977 +0000 UTC m=+1620.964893693" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.088298 4782 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-gknrm"] Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.090482 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.118221 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gknrm"] Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.164440 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-utilities\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.164519 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-catalog-content\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.164558 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s26m7\" (UniqueName: \"kubernetes.io/projected/e4d69bec-195f-4b80-95b7-8e69a4259cc7-kube-api-access-s26m7\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.266301 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-utilities\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.266386 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-catalog-content\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.266416 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s26m7\" (UniqueName: \"kubernetes.io/projected/e4d69bec-195f-4b80-95b7-8e69a4259cc7-kube-api-access-s26m7\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.266861 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-utilities\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.267228 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-catalog-content\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") 
" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.284467 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s26m7\" (UniqueName: \"kubernetes.io/projected/e4d69bec-195f-4b80-95b7-8e69a4259cc7-kube-api-access-s26m7\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.410176 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:42 crc kubenswrapper[4782]: I0202 11:05:42.080743 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gknrm"] Feb 02 11:05:42 crc kubenswrapper[4782]: I0202 11:05:42.859485 4782 generic.go:334] "Generic (PLEG): container finished" podID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerID="d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531" exitCode=0 Feb 02 11:05:42 crc kubenswrapper[4782]: I0202 11:05:42.859596 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerDied","Data":"d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531"} Feb 02 11:05:42 crc kubenswrapper[4782]: I0202 11:05:42.859848 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerStarted","Data":"9fafdbb90ddb155007c16973867b739af133dc546a62267221c20a82aadd27f2"} Feb 02 11:05:42 crc kubenswrapper[4782]: I0202 11:05:42.862472 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:05:43 crc kubenswrapper[4782]: I0202 11:05:43.870733 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerStarted","Data":"122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187"} Feb 02 11:05:44 crc kubenswrapper[4782]: I0202 11:05:44.880839 4782 generic.go:334] "Generic (PLEG): container finished" podID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerID="122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187" exitCode=0 Feb 02 11:05:44 crc kubenswrapper[4782]: I0202 11:05:44.880943 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerDied","Data":"122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187"} Feb 02 11:05:45 crc kubenswrapper[4782]: I0202 11:05:45.891674 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerStarted","Data":"a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d"} Feb 02 11:05:45 crc kubenswrapper[4782]: I0202 11:05:45.913794 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gknrm" podStartSLOduration=2.5067260449999997 podStartE2EDuration="4.913775026s" podCreationTimestamp="2026-02-02 11:05:41 +0000 UTC" firstStartedPulling="2026-02-02 11:05:42.862173185 +0000 UTC m=+1622.746365891" lastFinishedPulling="2026-02-02 11:05:45.269222156 
+0000 UTC m=+1625.153414872" observedRunningTime="2026-02-02 11:05:45.911372157 +0000 UTC m=+1625.795564883" watchObservedRunningTime="2026-02-02 11:05:45.913775026 +0000 UTC m=+1625.797967732" Feb 02 11:05:51 crc kubenswrapper[4782]: I0202 11:05:51.410700 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:51 crc kubenswrapper[4782]: I0202 11:05:51.411682 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:51 crc kubenswrapper[4782]: I0202 11:05:51.463886 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:51 crc kubenswrapper[4782]: I0202 11:05:51.988456 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:52 crc kubenswrapper[4782]: I0202 11:05:52.067381 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gknrm"] Feb 02 11:05:52 crc kubenswrapper[4782]: I0202 11:05:52.951490 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:05:52 crc kubenswrapper[4782]: I0202 11:05:52.951544 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:05:53 crc kubenswrapper[4782]: I0202 11:05:53.953125 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gknrm" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="registry-server" containerID="cri-o://a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d" gracePeriod=2 Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.442278 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.530068 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-utilities\") pod \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.530111 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-catalog-content\") pod \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.530161 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s26m7\" (UniqueName: \"kubernetes.io/projected/e4d69bec-195f-4b80-95b7-8e69a4259cc7-kube-api-access-s26m7\") pod \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.533653 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-utilities" (OuterVolumeSpecName: "utilities") pod "e4d69bec-195f-4b80-95b7-8e69a4259cc7" (UID: "e4d69bec-195f-4b80-95b7-8e69a4259cc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.535839 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d69bec-195f-4b80-95b7-8e69a4259cc7-kube-api-access-s26m7" (OuterVolumeSpecName: "kube-api-access-s26m7") pod "e4d69bec-195f-4b80-95b7-8e69a4259cc7" (UID: "e4d69bec-195f-4b80-95b7-8e69a4259cc7"). InnerVolumeSpecName "kube-api-access-s26m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.553467 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4d69bec-195f-4b80-95b7-8e69a4259cc7" (UID: "e4d69bec-195f-4b80-95b7-8e69a4259cc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.632558 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.632593 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.632605 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s26m7\" (UniqueName: \"kubernetes.io/projected/e4d69bec-195f-4b80-95b7-8e69a4259cc7-kube-api-access-s26m7\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.964080 4782 generic.go:334] "Generic (PLEG): container finished" podID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerID="a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d" exitCode=0 Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.964121 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerDied","Data":"a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d"} Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.964154 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerDied","Data":"9fafdbb90ddb155007c16973867b739af133dc546a62267221c20a82aadd27f2"} Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.964171 4782 scope.go:117] "RemoveContainer" containerID="a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.964912 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.990714 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gknrm"] Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.994257 4782 scope.go:117] "RemoveContainer" containerID="122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.000124 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gknrm"] Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.014259 4782 scope.go:117] "RemoveContainer" containerID="d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.055133 4782 scope.go:117] "RemoveContainer" containerID="a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d" Feb 02 11:05:55 crc kubenswrapper[4782]: E0202 11:05:55.055784 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d\": container with ID starting with a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d not found: ID does not exist" containerID="a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.055846 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d"} err="failed to get container status \"a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d\": rpc error: code = NotFound desc = could not find container \"a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d\": container with ID starting with a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d not found: ID does not exist" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.055896 4782 scope.go:117] "RemoveContainer" containerID="122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187" Feb 02 11:05:55 crc kubenswrapper[4782]: E0202 11:05:55.056373 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187\": container with ID starting with 122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187 not found: ID does not exist" containerID="122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.056413 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187"} err="failed to get container status \"122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187\": rpc error: code = NotFound desc = could not find container \"122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187\": container with ID starting with 122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187 not found: ID does not exist" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.056438 4782 scope.go:117] "RemoveContainer" containerID="d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531" Feb 02 11:05:55 crc kubenswrapper[4782]: E0202 11:05:55.056915 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531\": container with ID starting with d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531 not found: ID does not exist" containerID="d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.056951 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531"} err="failed to get container status \"d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531\": rpc error: code = NotFound desc = could not find container \"d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531\": container with ID starting with d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531 not found: ID does not exist" Feb 02 11:05:56 crc kubenswrapper[4782]: I0202 11:05:56.835589 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" path="/var/lib/kubelet/pods/e4d69bec-195f-4b80-95b7-8e69a4259cc7/volumes" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.613168 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rxp2h"] Feb 02 11:06:02 crc kubenswrapper[4782]: E0202 11:06:02.614273 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="extract-content" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.614291 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="extract-content" Feb 02 11:06:02 crc kubenswrapper[4782]: E0202 11:06:02.614317 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="extract-utilities" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.614325 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="extract-utilities" Feb 02 11:06:02 crc kubenswrapper[4782]: E0202 11:06:02.614341 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="registry-server" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.614349 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="registry-server" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.615303 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="registry-server" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.617029 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.627831 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxp2h"] Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.681528 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-catalog-content\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.682133 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-utilities\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.682299 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vn28\" (UniqueName: \"kubernetes.io/projected/a8f72158-3325-454c-a8e2-64301e578f90-kube-api-access-6vn28\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.786388 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-utilities\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.786595 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vn28\" (UniqueName: \"kubernetes.io/projected/a8f72158-3325-454c-a8e2-64301e578f90-kube-api-access-6vn28\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.786729 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-catalog-content\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.787181 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-catalog-content\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.788223 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-utilities\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.813376 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6vn28\" (UniqueName: \"kubernetes.io/projected/a8f72158-3325-454c-a8e2-64301e578f90-kube-api-access-6vn28\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.937112 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:03 crc kubenswrapper[4782]: I0202 11:06:03.400428 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxp2h"] Feb 02 11:06:04 crc kubenswrapper[4782]: I0202 11:06:04.060432 4782 generic.go:334] "Generic (PLEG): container finished" podID="a8f72158-3325-454c-a8e2-64301e578f90" containerID="0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09" exitCode=0 Feb 02 11:06:04 crc kubenswrapper[4782]: I0202 11:06:04.060486 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerDied","Data":"0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09"} Feb 02 11:06:04 crc kubenswrapper[4782]: I0202 11:06:04.060521 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerStarted","Data":"25e7c10e9704ca146ecae8522d451a174f5be6f5c2a9cbbfbede8c6d8d3ac3b8"} Feb 02 11:06:06 crc kubenswrapper[4782]: I0202 11:06:06.080537 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerStarted","Data":"8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb"} Feb 02 11:06:07 crc kubenswrapper[4782]: I0202 11:06:07.092331 4782 generic.go:334] "Generic (PLEG): container finished" podID="a8f72158-3325-454c-a8e2-64301e578f90" containerID="8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb" exitCode=0 Feb 02 11:06:07 crc kubenswrapper[4782]: I0202 11:06:07.092390 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerDied","Data":"8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb"} Feb 02 11:06:08 crc kubenswrapper[4782]: I0202 11:06:08.104454 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerStarted","Data":"e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a"} Feb 02 11:06:08 crc kubenswrapper[4782]: I0202 11:06:08.130251 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rxp2h" podStartSLOduration=2.680014686 podStartE2EDuration="6.13022871s" podCreationTimestamp="2026-02-02 11:06:02 +0000 UTC" firstStartedPulling="2026-02-02 11:06:04.063314766 +0000 UTC m=+1643.947507472" lastFinishedPulling="2026-02-02 11:06:07.51352878 +0000 UTC m=+1647.397721496" observedRunningTime="2026-02-02 11:06:08.120587283 +0000 UTC m=+1648.004780009" watchObservedRunningTime="2026-02-02 11:06:08.13022871 +0000 UTC m=+1648.014421426" Feb 02 11:06:12 crc kubenswrapper[4782]: I0202 11:06:12.937940 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:12 crc kubenswrapper[4782]: I0202 11:06:12.938427 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:12 crc kubenswrapper[4782]: I0202 11:06:12.984474 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:13 crc kubenswrapper[4782]: I0202 11:06:13.210302 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:13 crc kubenswrapper[4782]: I0202 11:06:13.282715 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxp2h"] Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.160319 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rxp2h" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="registry-server" containerID="cri-o://e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a" gracePeriod=2 Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.629895 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-crk2q"] Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.631147 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.631763 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.657157 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crk2q"] Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.731071 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-catalog-content\") pod \"a8f72158-3325-454c-a8e2-64301e578f90\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.731394 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-utilities\") pod \"a8f72158-3325-454c-a8e2-64301e578f90\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.731426 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vn28\" (UniqueName: \"kubernetes.io/projected/a8f72158-3325-454c-a8e2-64301e578f90-kube-api-access-6vn28\") pod \"a8f72158-3325-454c-a8e2-64301e578f90\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.731660 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-utilities\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.731789 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9dq2c\" (UniqueName: \"kubernetes.io/projected/16fe0977-c663-4e1a-97e3-7de4ae38df03-kube-api-access-9dq2c\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.731879 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-catalog-content\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.732351 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-utilities" (OuterVolumeSpecName: "utilities") pod "a8f72158-3325-454c-a8e2-64301e578f90" (UID: "a8f72158-3325-454c-a8e2-64301e578f90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.753703 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f72158-3325-454c-a8e2-64301e578f90-kube-api-access-6vn28" (OuterVolumeSpecName: "kube-api-access-6vn28") pod "a8f72158-3325-454c-a8e2-64301e578f90" (UID: "a8f72158-3325-454c-a8e2-64301e578f90"). InnerVolumeSpecName "kube-api-access-6vn28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.794924 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8f72158-3325-454c-a8e2-64301e578f90" (UID: "a8f72158-3325-454c-a8e2-64301e578f90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.834128 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-catalog-content\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.833623 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-catalog-content\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.834607 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-utilities\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.835035 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-utilities\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.835381 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dq2c\" (UniqueName: \"kubernetes.io/projected/16fe0977-c663-4e1a-97e3-7de4ae38df03-kube-api-access-9dq2c\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.835955 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.836071 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vn28\" (UniqueName: \"kubernetes.io/projected/a8f72158-3325-454c-a8e2-64301e578f90-kube-api-access-6vn28\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.836166 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.859190 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dq2c\" (UniqueName: \"kubernetes.io/projected/16fe0977-c663-4e1a-97e3-7de4ae38df03-kube-api-access-9dq2c\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.953140 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.174591 4782 generic.go:334] "Generic (PLEG): container finished" podID="a8f72158-3325-454c-a8e2-64301e578f90" containerID="e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a" exitCode=0 Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.174839 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.174816 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerDied","Data":"e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a"} Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.176492 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerDied","Data":"25e7c10e9704ca146ecae8522d451a174f5be6f5c2a9cbbfbede8c6d8d3ac3b8"} Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.176523 4782 scope.go:117] "RemoveContainer" containerID="e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.235692 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxp2h"] Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.242524 4782 scope.go:117] "RemoveContainer" containerID="8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.242729 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rxp2h"] Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.281795 4782 scope.go:117] "RemoveContainer" containerID="0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.344414 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crk2q"] Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.345527 4782 scope.go:117] "RemoveContainer" containerID="e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a" Feb 02 11:06:16 crc kubenswrapper[4782]: E0202 11:06:16.348971 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a\": container with ID starting with e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a not found: ID does not exist" containerID="e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.349014 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a"} err="failed to get container status \"e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a\": rpc error: code = NotFound desc = could not find container \"e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a\": container with ID starting with e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a not found: ID does not exist" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.349042 4782 scope.go:117] 
"RemoveContainer" containerID="8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb" Feb 02 11:06:16 crc kubenswrapper[4782]: E0202 11:06:16.349557 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb\": container with ID starting with 8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb not found: ID does not exist" containerID="8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.349586 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb"} err="failed to get container status \"8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb\": rpc error: code = NotFound desc = could not find container \"8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb\": container with ID starting with 8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb not found: ID does not exist" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.349603 4782 scope.go:117] "RemoveContainer" containerID="0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09" Feb 02 11:06:16 crc kubenswrapper[4782]: E0202 11:06:16.351087 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09\": container with ID starting with 0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09 not found: ID does not exist" containerID="0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.351118 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09"} err="failed to get container status \"0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09\": rpc error: code = NotFound desc = could not find container \"0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09\": container with ID starting with 0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09 not found: ID does not exist" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.832192 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f72158-3325-454c-a8e2-64301e578f90" path="/var/lib/kubelet/pods/a8f72158-3325-454c-a8e2-64301e578f90/volumes" Feb 02 11:06:17 crc kubenswrapper[4782]: I0202 11:06:17.188445 4782 generic.go:334] "Generic (PLEG): container finished" podID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerID="222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593" exitCode=0 Feb 02 11:06:17 crc kubenswrapper[4782]: I0202 11:06:17.188509 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerDied","Data":"222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593"} Feb 02 11:06:17 crc kubenswrapper[4782]: I0202 11:06:17.188833 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerStarted","Data":"4d2bdf4c1b73d8cd1ae616d797d6ab67314db7247425c880cb4fa4702b118dc7"} Feb 02 
Feb 02 11:06:19 crc kubenswrapper[4782]: I0202 11:06:19.222611 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerStarted","Data":"c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61"}
Feb 02 11:06:20 crc kubenswrapper[4782]: I0202 11:06:20.232502 4782 generic.go:334] "Generic (PLEG): container finished" podID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerID="c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61" exitCode=0
Feb 02 11:06:20 crc kubenswrapper[4782]: I0202 11:06:20.232758 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerDied","Data":"c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61"}
Feb 02 11:06:21 crc kubenswrapper[4782]: I0202 11:06:21.249505 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerStarted","Data":"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e"}
Feb 02 11:06:21 crc kubenswrapper[4782]: I0202 11:06:21.274975 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-crk2q" podStartSLOduration=2.7276333409999998 podStartE2EDuration="6.274955046s" podCreationTimestamp="2026-02-02 11:06:15 +0000 UTC" firstStartedPulling="2026-02-02 11:06:17.191037963 +0000 UTC m=+1657.075230679" lastFinishedPulling="2026-02-02 11:06:20.738359668 +0000 UTC m=+1660.622552384" observedRunningTime="2026-02-02 11:06:21.273026861 +0000 UTC m=+1661.157219577" watchObservedRunningTime="2026-02-02 11:06:21.274955046 +0000 UTC m=+1661.159147782"
Feb 02 11:06:22 crc kubenswrapper[4782]: I0202 11:06:22.951786 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 11:06:22 crc kubenswrapper[4782]: I0202 11:06:22.952206 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 11:06:25 crc kubenswrapper[4782]: I0202 11:06:25.954826 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-crk2q"
Feb 02 11:06:25 crc kubenswrapper[4782]: I0202 11:06:25.955249 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-crk2q"
Feb 02 11:06:26 crc kubenswrapper[4782]: I0202 11:06:26.008242 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-crk2q"
Feb 02 11:06:26 crc kubenswrapper[4782]: I0202 11:06:26.347813 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-crk2q"
Feb 02 11:06:26 crc kubenswrapper[4782]: I0202 11:06:26.398604 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crk2q"]
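The "Probe failed ... connection refused" lines above come from an HTTP liveness probe against http://127.0.0.1:8798/health. The check reduces to roughly the following self-contained sketch (not the kubelet prober; the one-second timeout is an assumption):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeHTTP performs one HTTP GET and maps the result to healthy/unhealthy,
    // the way the failures above map a refused connection to probe failure.
    func probeHTTP(url string) (bool, string) {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
            return false, err.Error()
        }
        defer resp.Body.Close()
        return resp.StatusCode >= 200 && resp.StatusCode < 400, resp.Status
    }

    func main() {
        healthy, detail := probeHTTP("http://127.0.0.1:8798/health")
        fmt.Printf("Liveness probe status=%v output=%q\n", healthy, detail)
    }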
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.315971 4782 generic.go:334] "Generic (PLEG): container finished" podID="5a24fab5-51cc-4f0a-a823-c9748efd8410" containerID="65e9d4460cda578d85c98c8eacb6e70446a4235a9df02ce23f87a954cc50ea96" exitCode=0
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.316035 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" event={"ID":"5a24fab5-51cc-4f0a-a823-c9748efd8410","Type":"ContainerDied","Data":"65e9d4460cda578d85c98c8eacb6e70446a4235a9df02ce23f87a954cc50ea96"}
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.316888 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-crk2q" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="registry-server" containerID="cri-o://f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e" gracePeriod=2
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.763831 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crk2q"
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.834132 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-catalog-content\") pod \"16fe0977-c663-4e1a-97e3-7de4ae38df03\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") "
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.834210 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-utilities\") pod \"16fe0977-c663-4e1a-97e3-7de4ae38df03\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") "
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.834903 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-utilities" (OuterVolumeSpecName: "utilities") pod "16fe0977-c663-4e1a-97e3-7de4ae38df03" (UID: "16fe0977-c663-4e1a-97e3-7de4ae38df03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.835029 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dq2c\" (UniqueName: \"kubernetes.io/projected/16fe0977-c663-4e1a-97e3-7de4ae38df03-kube-api-access-9dq2c\") pod \"16fe0977-c663-4e1a-97e3-7de4ae38df03\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") "
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.836022 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-utilities\") on node \"crc\" DevicePath \"\""
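"Killing container with a grace period ... gracePeriod=2" above is a two-phase stop: signal, wait up to the grace period, then force. A sketch of the same pattern driving a plain OS process rather than a real CRI-O container ("sleep 60" is a stand-in workload):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        // Stand-in workload; a real runtime signals the container's init process.
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        gracePeriod := 2 * time.Second      // gracePeriod=2 in the log line above
        cmd.Process.Signal(syscall.SIGTERM) // phase 1: polite request to exit

        select {
        case err := <-done:
            fmt.Println("exited within grace period:", err)
        case <-time.After(gracePeriod):
            cmd.Process.Kill() // phase 2: force (SIGKILL)
            fmt.Println("grace period elapsed, killed")
        }
    }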
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.878725 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16fe0977-c663-4e1a-97e3-7de4ae38df03" (UID: "16fe0977-c663-4e1a-97e3-7de4ae38df03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.938095 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.938146 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dq2c\" (UniqueName: \"kubernetes.io/projected/16fe0977-c663-4e1a-97e3-7de4ae38df03-kube-api-access-9dq2c\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.333245 4782 generic.go:334] "Generic (PLEG): container finished" podID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerID="f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e" exitCode=0 Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.334194 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.335714 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerDied","Data":"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e"} Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.335811 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerDied","Data":"4d2bdf4c1b73d8cd1ae616d797d6ab67314db7247425c880cb4fa4702b118dc7"} Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.335836 4782 scope.go:117] "RemoveContainer" containerID="f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.414624 4782 scope.go:117] "RemoveContainer" containerID="c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.421268 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crk2q"] Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.437748 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-crk2q"] Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.490843 4782 scope.go:117] "RemoveContainer" containerID="222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.549602 4782 scope.go:117] "RemoveContainer" containerID="f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e" Feb 02 11:06:29 crc kubenswrapper[4782]: E0202 11:06:29.551495 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e\": container with ID starting with 
Feb 02 11:06:29 crc kubenswrapper[4782]: E0202 11:06:29.551495 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e\": container with ID starting with f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e not found: ID does not exist" containerID="f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e"
Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.551533 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e"} err="failed to get container status \"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e\": rpc error: code = NotFound desc = could not find container \"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e\": container with ID starting with f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e not found: ID does not exist"
Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.551574 4782 scope.go:117] "RemoveContainer" containerID="c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61"
Feb 02 11:06:29 crc kubenswrapper[4782]: E0202 11:06:29.552067 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61\": container with ID starting with c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61 not found: ID does not exist" containerID="c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61"
Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.552094 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61"} err="failed to get container status \"c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61\": rpc error: code = NotFound desc = could not find container \"c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61\": container with ID starting with c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61 not found: ID does not exist"
Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.552111 4782 scope.go:117] "RemoveContainer" containerID="222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593"
Feb 02 11:06:29 crc kubenswrapper[4782]: E0202 11:06:29.556372 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593\": container with ID starting with 222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593 not found: ID does not exist" containerID="222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593"
Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.556410 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593"} err="failed to get container status \"222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593\": rpc error: code = NotFound desc = could not find container \"222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593\": container with ID starting with 222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593 not found: ID does not exist"
Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.938816 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78"
Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.989868 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-inventory\") pod \"5a24fab5-51cc-4f0a-a823-c9748efd8410\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") "
Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.990533 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdg7q\" (UniqueName: \"kubernetes.io/projected/5a24fab5-51cc-4f0a-a823-c9748efd8410-kube-api-access-qdg7q\") pod \"5a24fab5-51cc-4f0a-a823-c9748efd8410\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") "
Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.990723 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-ssh-key-openstack-edpm-ipam\") pod \"5a24fab5-51cc-4f0a-a823-c9748efd8410\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") "
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.011758 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a24fab5-51cc-4f0a-a823-c9748efd8410-kube-api-access-qdg7q" (OuterVolumeSpecName: "kube-api-access-qdg7q") pod "5a24fab5-51cc-4f0a-a823-c9748efd8410" (UID: "5a24fab5-51cc-4f0a-a823-c9748efd8410"). InnerVolumeSpecName "kube-api-access-qdg7q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.029411 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5a24fab5-51cc-4f0a-a823-c9748efd8410" (UID: "5a24fab5-51cc-4f0a-a823-c9748efd8410"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.040534 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-inventory" (OuterVolumeSpecName: "inventory") pod "5a24fab5-51cc-4f0a-a823-c9748efd8410" (UID: "5a24fab5-51cc-4f0a-a823-c9748efd8410"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.093420 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdg7q\" (UniqueName: \"kubernetes.io/projected/5a24fab5-51cc-4f0a-a823-c9748efd8410-kube-api-access-qdg7q\") on node \"crc\" DevicePath \"\""
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.093460 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.093499 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-inventory\") on node \"crc\" DevicePath \"\""
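The UnmountVolume and "Volume detached" sequences above follow the reconciler pattern: compare the volumes pods still need (the desired world) with the volumes actually mounted (the actual world) and tear down the difference. A toy sketch under that reading, with hypothetical names rather than kubelet types:

    package main

    import "fmt"

    // reconcile tears down anything mounted (actual) that no pod needs (desired),
    // mirroring the UnmountVolume -> "Volume detached" progression above.
    func reconcile(desired, actual map[string]bool) {
        for vol := range actual {
            if !desired[vol] {
                fmt.Println("operationExecutor.UnmountVolume started for volume:", vol)
                delete(actual, vol) // TearDown succeeded
                fmt.Println("Volume detached for volume:", vol)
            }
        }
    }

    func main() {
        desired := map[string]bool{} // the pod was deleted: nothing is desired
        actual := map[string]bool{
            "inventory":                   true,
            "kube-api-access-qdg7q":       true,
            "ssh-key-openstack-edpm-ipam": true,
        }
        reconcile(desired, actual)
    }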
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.344598 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.344619 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" event={"ID":"5a24fab5-51cc-4f0a-a823-c9748efd8410","Type":"ContainerDied","Data":"be7ce3ccb69a5745054007321e53120f9506050fe2a04ddb2bd3dfef26a90754"}
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.344671 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be7ce3ccb69a5745054007321e53120f9506050fe2a04ddb2bd3dfef26a90754"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454422 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"]
Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454838 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="extract-content"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454860 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="extract-content"
Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454876 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a24fab5-51cc-4f0a-a823-c9748efd8410" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454885 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a24fab5-51cc-4f0a-a823-c9748efd8410" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454905 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="extract-utilities"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454913 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="extract-utilities"
Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454925 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="extract-utilities"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454933 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="extract-utilities"
Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454947 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="registry-server"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454957 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="registry-server"
Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454973 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="extract-content"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454979 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="extract-content"
Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454992 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="registry-server"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454998 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="registry-server"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.455158 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="registry-server"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.455177 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a24fab5-51cc-4f0a-a823-c9748efd8410" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.455186 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="registry-server"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.455815 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.458460 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.458989 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.459290 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.459539 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.477203 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"]
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.515387 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.515505 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64tmc\" (UniqueName: \"kubernetes.io/projected/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-kube-api-access-64tmc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.515600 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
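The cpu_manager and memory_manager "RemoveStaleState" lines interleaved above are bookkeeping: per-container resource assignments keyed by pod UID are dropped once the owning pod is no longer active. Sketched minimally with hypothetical types, not the kubelet state store:

    package main

    import "fmt"

    // key identifies an assignment; hypothetical, not a kubelet type.
    type key struct{ podUID, container string }

    // removeStaleState drops assignments whose pod is no longer active, the
    // bookkeeping behind "RemoveStaleState" / "Deleted CPUSet assignment".
    func removeStaleState(assignments map[key]string, activePods map[string]bool) {
        for k := range assignments {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container %s/%s\n", k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {"16fe0977-c663-4e1a-97e3-7de4ae38df03", "registry-server"}: "cpuset 0-1",
        }
        removeStaleState(assignments, map[string]bool{}) // pod already gone
        fmt.Println("assignments left:", len(assignments))
    }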
(UniqueName: \"kubernetes.io/projected/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-kube-api-access-64tmc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.618001 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.618066 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.622245 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.623306 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.633754 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64tmc\" (UniqueName: \"kubernetes.io/projected/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-kube-api-access-64tmc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.825031 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.825031 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.835339 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" path="/var/lib/kubelet/pods/16fe0977-c663-4e1a-97e3-7de4ae38df03/volumes"
Feb 02 11:06:31 crc kubenswrapper[4782]: I0202 11:06:31.378695 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"]
Feb 02 11:06:32 crc kubenswrapper[4782]: I0202 11:06:32.370533 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" event={"ID":"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9","Type":"ContainerStarted","Data":"f4762256f4358dc203e66c5c913257fa22830b06b1398f9270a2496bb4594c32"}
Feb 02 11:06:32 crc kubenswrapper[4782]: I0202 11:06:32.370904 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" event={"ID":"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9","Type":"ContainerStarted","Data":"580d1d9a60156605fa15df0bef8c76e57ebca1e18461ff40d3d6df8b19f55d8c"}
Feb 02 11:06:32 crc kubenswrapper[4782]: I0202 11:06:32.389476 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" podStartSLOduration=1.9400604970000002 podStartE2EDuration="2.389455689s" podCreationTimestamp="2026-02-02 11:06:30 +0000 UTC" firstStartedPulling="2026-02-02 11:06:31.385398249 +0000 UTC m=+1671.269590965" lastFinishedPulling="2026-02-02 11:06:31.834793441 +0000 UTC m=+1671.718986157" observedRunningTime="2026-02-02 11:06:32.387218204 +0000 UTC m=+1672.271410940" watchObservedRunningTime="2026-02-02 11:06:32.389455689 +0000 UTC m=+1672.273648405"
Feb 02 11:06:37 crc kubenswrapper[4782]: I0202 11:06:37.410658 4782 generic.go:334] "Generic (PLEG): container finished" podID="79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" containerID="f4762256f4358dc203e66c5c913257fa22830b06b1398f9270a2496bb4594c32" exitCode=0
Feb 02 11:06:37 crc kubenswrapper[4782]: I0202 11:06:37.410725 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" event={"ID":"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9","Type":"ContainerDied","Data":"f4762256f4358dc203e66c5c913257fa22830b06b1398f9270a2496bb4594c32"}
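In the "Observed pod startup duration" line above, podStartSLOduration (1.94s) appears to be podStartE2EDuration (2.389s) minus the image-pull window (firstStartedPulling to lastFinishedPulling). The sketch below reproduces that arithmetic from the logged timestamps; it is a reading of the fields, not kubelet code:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps from the "Observed pod startup duration" line above,
        // rewritten in RFC 3339 form.
        created, _ := time.Parse(time.RFC3339, "2026-02-02T11:06:30Z")
        firstPull, _ := time.Parse(time.RFC3339Nano, "2026-02-02T11:06:31.385398249Z")
        lastPull, _ := time.Parse(time.RFC3339Nano, "2026-02-02T11:06:31.834793441Z")
        running, _ := time.Parse(time.RFC3339Nano, "2026-02-02T11:06:32.389455689Z")

        e2e := running.Sub(created)          // podStartE2EDuration = 2.389455689s
        slo := e2e - lastPull.Sub(firstPull) // minus pull window = 1.940060497s
        fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
    }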
Feb 02 11:06:38 crc kubenswrapper[4782]: I0202 11:06:38.875435 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:38 crc kubenswrapper[4782]: I0202 11:06:38.973446 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-ssh-key-openstack-edpm-ipam\") pod \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") "
Feb 02 11:06:38 crc kubenswrapper[4782]: I0202 11:06:38.974236 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64tmc\" (UniqueName: \"kubernetes.io/projected/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-kube-api-access-64tmc\") pod \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") "
Feb 02 11:06:38 crc kubenswrapper[4782]: I0202 11:06:38.974378 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-inventory\") pod \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") "
Feb 02 11:06:38 crc kubenswrapper[4782]: I0202 11:06:38.978894 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-kube-api-access-64tmc" (OuterVolumeSpecName: "kube-api-access-64tmc") pod "79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" (UID: "79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9"). InnerVolumeSpecName "kube-api-access-64tmc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:06:38 crc kubenswrapper[4782]: I0202 11:06:38.999272 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" (UID: "79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.078632 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64tmc\" (UniqueName: \"kubernetes.io/projected/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-kube-api-access-64tmc\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.078917 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.079023 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.448439 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" event={"ID":"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9","Type":"ContainerDied","Data":"580d1d9a60156605fa15df0bef8c76e57ebca1e18461ff40d3d6df8b19f55d8c"} Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.448475 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="580d1d9a60156605fa15df0bef8c76e57ebca1e18461ff40d3d6df8b19f55d8c" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.448594 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.529095 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"] Feb 02 11:06:39 crc kubenswrapper[4782]: E0202 11:06:39.529513 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.529530 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.529738 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.530328 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.530328 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"
Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.533157 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.534076 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt"
Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.534263 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.534372 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.550758 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"]
Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.591006 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckvj5\" (UniqueName: \"kubernetes.io/projected/425704dd-e289-42f7-8b10-bd817b279099-kube-api-access-ckvj5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"
Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.591127 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"
Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.591204 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"
Feb 02 11:06:39 crc kubenswrapper[4782]: E0202 11:06:39.647103 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79fcc9ce_0ed4_4b7d_9b23_9a55d25349f9.slice/crio-580d1d9a60156605fa15df0bef8c76e57ebca1e18461ff40d3d6df8b19f55d8c\": RecentStats: unable to find data in memory cache]"
Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.692973 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckvj5\" (UniqueName: \"kubernetes.io/projected/425704dd-e289-42f7-8b10-bd817b279099-kube-api-access-ckvj5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"
\"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.693546 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.699171 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.699676 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.708801 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckvj5\" (UniqueName: \"kubernetes.io/projected/425704dd-e289-42f7-8b10-bd817b279099-kube-api-access-ckvj5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.856940 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.856940 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"
Feb 02 11:06:41 crc kubenswrapper[4782]: I0202 11:06:41.070565 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"]
Feb 02 11:06:41 crc kubenswrapper[4782]: I0202 11:06:41.468342 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" event={"ID":"425704dd-e289-42f7-8b10-bd817b279099","Type":"ContainerStarted","Data":"603626e15d1c77272fcba0456289f5f80378cd91e749b2e5abd486589571d46a"}
Feb 02 11:06:41 crc kubenswrapper[4782]: I0202 11:06:41.499611 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 11:06:42 crc kubenswrapper[4782]: I0202 11:06:42.481509 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" event={"ID":"425704dd-e289-42f7-8b10-bd817b279099","Type":"ContainerStarted","Data":"d704a337ba153cb759a9029666c65419beb8b579a0125a3a01b8036bcaa12955"}
Feb 02 11:06:42 crc kubenswrapper[4782]: I0202 11:06:42.527346 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" podStartSLOduration=3.101860264 podStartE2EDuration="3.527322609s" podCreationTimestamp="2026-02-02 11:06:39 +0000 UTC" firstStartedPulling="2026-02-02 11:06:41.071436777 +0000 UTC m=+1680.955629483" lastFinishedPulling="2026-02-02 11:06:41.496899112 +0000 UTC m=+1681.381091828" observedRunningTime="2026-02-02 11:06:42.517053664 +0000 UTC m=+1682.401246390" watchObservedRunningTime="2026-02-02 11:06:42.527322609 +0000 UTC m=+1682.411515325"
Feb 02 11:06:43 crc kubenswrapper[4782]: I0202 11:06:43.074185 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-377c-account-create-update-4zm4s"]
Feb 02 11:06:43 crc kubenswrapper[4782]: I0202 11:06:43.082804 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-l6d9n"]
Feb 02 11:06:43 crc kubenswrapper[4782]: I0202 11:06:43.094693 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-377c-account-create-update-4zm4s"]
Feb 02 11:06:43 crc kubenswrapper[4782]: I0202 11:06:43.103542 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-l6d9n"]
Feb 02 11:06:44 crc kubenswrapper[4782]: I0202 11:06:44.830989 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfde9ba3-fda5-496b-8ee5-52430e61f02a" path="/var/lib/kubelet/pods/bfde9ba3-fda5-496b-8ee5-52430e61f02a/volumes"
Feb 02 11:06:44 crc kubenswrapper[4782]: I0202 11:06:44.831534 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce57fffc-4d75-495f-b7ed-28676054f90e" path="/var/lib/kubelet/pods/ce57fffc-4d75-495f-b7ed-28676054f90e/volumes"
Feb 02 11:06:47 crc kubenswrapper[4782]: I0202 11:06:47.034352 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-77ps5"]
Feb 02 11:06:47 crc kubenswrapper[4782]: I0202 11:06:47.050669 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-77ps5"]
Feb 02 11:06:48 crc kubenswrapper[4782]: I0202 11:06:48.097497 4782 scope.go:117] "RemoveContainer" containerID="2a0dfecd12eefed7e04fa4bd8706afbc2a21a95326fc9fb0c721694048febe14"
containerID="922a5052c537ca60debaeb30c310ad62b9d6cc2296c5f5cb93deeef6d784a0c2" Feb 02 11:06:48 crc kubenswrapper[4782]: I0202 11:06:48.832280 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d561a4a7-bb99-43c6-859e-e3269a35a073" path="/var/lib/kubelet/pods/d561a4a7-bb99-43c6-859e-e3269a35a073/volumes" Feb 02 11:06:49 crc kubenswrapper[4782]: I0202 11:06:49.029097 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2124-account-create-update-npd9h"] Feb 02 11:06:49 crc kubenswrapper[4782]: I0202 11:06:49.040737 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0259-account-create-update-n5p89"] Feb 02 11:06:49 crc kubenswrapper[4782]: I0202 11:06:49.054424 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-6cg8m"] Feb 02 11:06:49 crc kubenswrapper[4782]: I0202 11:06:49.064223 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-6cg8m"] Feb 02 11:06:49 crc kubenswrapper[4782]: I0202 11:06:49.072284 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0259-account-create-update-n5p89"] Feb 02 11:06:49 crc kubenswrapper[4782]: I0202 11:06:49.080842 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2124-account-create-update-npd9h"] Feb 02 11:06:50 crc kubenswrapper[4782]: I0202 11:06:50.851033 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db12436-a377-40c9-bc4e-9fe301b0b4cb" path="/var/lib/kubelet/pods/1db12436-a377-40c9-bc4e-9fe301b0b4cb/volumes" Feb 02 11:06:50 crc kubenswrapper[4782]: I0202 11:06:50.859873 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80dad8de-560e-4ff5-b196-aa0bbbc2be15" path="/var/lib/kubelet/pods/80dad8de-560e-4ff5-b196-aa0bbbc2be15/volumes" Feb 02 11:06:50 crc kubenswrapper[4782]: I0202 11:06:50.860877 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b358cda4-3c47-4270-ada7-f7653d5da96f" path="/var/lib/kubelet/pods/b358cda4-3c47-4270-ada7-f7653d5da96f/volumes" Feb 02 11:06:52 crc kubenswrapper[4782]: I0202 11:06:52.951712 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:06:52 crc kubenswrapper[4782]: I0202 11:06:52.951783 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:06:52 crc kubenswrapper[4782]: I0202 11:06:52.951837 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:06:52 crc kubenswrapper[4782]: I0202 11:06:52.952689 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:06:52 crc kubenswrapper[4782]: I0202 
Feb 02 11:06:52 crc kubenswrapper[4782]: I0202 11:06:52.952747 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" gracePeriod=600
Feb 02 11:06:53 crc kubenswrapper[4782]: E0202 11:06:53.082406 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:06:53 crc kubenswrapper[4782]: I0202 11:06:53.569279 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" exitCode=0
Feb 02 11:06:53 crc kubenswrapper[4782]: I0202 11:06:53.569328 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8"}
Feb 02 11:06:53 crc kubenswrapper[4782]: I0202 11:06:53.569366 4782 scope.go:117] "RemoveContainer" containerID="9e7f3d9f7d6457b5c614828f06a2a5456dc06adf6cf2e31e022d381663249dca"
Feb 02 11:06:53 crc kubenswrapper[4782]: I0202 11:06:53.569964 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8"
Feb 02 11:06:53 crc kubenswrapper[4782]: E0202 11:06:53.570206 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:07:04 crc kubenswrapper[4782]: I0202 11:07:04.821218 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8"
Feb 02 11:07:04 crc kubenswrapper[4782]: E0202 11:07:04.821834 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.051543 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xzm82"]
Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.069942 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-q97pt"]
Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.078151 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6jdgj"]
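The recurring "CrashLoopBackOff: back-off 5m0s restarting failed container" errors above reflect the kubelet's restart backoff, which grows exponentially per failed restart and is capped; only the 5m0s cap is visible in this log, so the 10s initial delay below is an assumption, not something the log shows. A sketch:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed initial delay; the log only shows the 5m0s cap.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("restart attempt %d: back-off %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // clamp: "back-off 5m0s restarting failed container"
            }
        }
    }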
pods=["openstack/neutron-63a1-account-create-update-4kn5m"] Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.098286 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xzm82"] Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.106137 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-q97pt"] Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.114459 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-63a1-account-create-update-4kn5m"] Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.121951 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6jdgj"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.046911 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0e36-account-create-update-f5556"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.080071 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0e36-account-create-update-f5556"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.091284 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-7dbcc"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.104326 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-7dbcc"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.111921 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8017-account-create-update-t6d9m"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.119341 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8017-account-create-update-t6d9m"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.833482 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29024188-b374-45b7-ad85-b2d4ca88b485" path="/var/lib/kubelet/pods/29024188-b374-45b7-ad85-b2d4ca88b485/volumes" Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.834156 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ddb047-8931-415b-8d0f-d0f73b72c8b3" path="/var/lib/kubelet/pods/53ddb047-8931-415b-8d0f-d0f73b72c8b3/volumes" Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.834870 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68e5ac2b-72a8-46be-839a-fe639916a32e" path="/var/lib/kubelet/pods/68e5ac2b-72a8-46be-839a-fe639916a32e/volumes" Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.835506 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" path="/var/lib/kubelet/pods/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69/volumes" Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.836760 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821635c8-3cf1-408b-8949-81dbc48b07b6" path="/var/lib/kubelet/pods/821635c8-3cf1-408b-8949-81dbc48b07b6/volumes" Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.837360 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b78c9d8b-0793-4e57-8a3d-ba7303f12d37" path="/var/lib/kubelet/pods/b78c9d8b-0793-4e57-8a3d-ba7303f12d37/volumes" Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.837978 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3c77267-9133-440d-9f4e-536b2a021fdc" path="/var/lib/kubelet/pods/c3c77267-9133-440d-9f4e-536b2a021fdc/volumes" Feb 02 11:07:15 crc kubenswrapper[4782]: I0202 
Feb 02 11:07:15 crc kubenswrapper[4782]: I0202 11:07:15.821456 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8"
Feb 02 11:07:15 crc kubenswrapper[4782]: E0202 11:07:15.822163 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:07:17 crc kubenswrapper[4782]: I0202 11:07:17.033040 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-v4g2v"]
Feb 02 11:07:17 crc kubenswrapper[4782]: I0202 11:07:17.040995 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-v4g2v"]
Feb 02 11:07:18 crc kubenswrapper[4782]: I0202 11:07:18.808087 4782 generic.go:334] "Generic (PLEG): container finished" podID="425704dd-e289-42f7-8b10-bd817b279099" containerID="d704a337ba153cb759a9029666c65419beb8b579a0125a3a01b8036bcaa12955" exitCode=0
Feb 02 11:07:18 crc kubenswrapper[4782]: I0202 11:07:18.808144 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" event={"ID":"425704dd-e289-42f7-8b10-bd817b279099","Type":"ContainerDied","Data":"d704a337ba153cb759a9029666c65419beb8b579a0125a3a01b8036bcaa12955"}
Feb 02 11:07:18 crc kubenswrapper[4782]: I0202 11:07:18.832377 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="843d8da2-ab8c-4938-be4b-aa67af531e1e" path="/var/lib/kubelet/pods/843d8da2-ab8c-4938-be4b-aa67af531e1e/volumes"
Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.256377 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"
Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.352671 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-ssh-key-openstack-edpm-ipam\") pod \"425704dd-e289-42f7-8b10-bd817b279099\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") "
Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.352903 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckvj5\" (UniqueName: \"kubernetes.io/projected/425704dd-e289-42f7-8b10-bd817b279099-kube-api-access-ckvj5\") pod \"425704dd-e289-42f7-8b10-bd817b279099\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") "
Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.352936 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-inventory\") pod \"425704dd-e289-42f7-8b10-bd817b279099\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") "
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.383229 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "425704dd-e289-42f7-8b10-bd817b279099" (UID: "425704dd-e289-42f7-8b10-bd817b279099"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.383685 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-inventory" (OuterVolumeSpecName: "inventory") pod "425704dd-e289-42f7-8b10-bd817b279099" (UID: "425704dd-e289-42f7-8b10-bd817b279099"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.455542 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckvj5\" (UniqueName: \"kubernetes.io/projected/425704dd-e289-42f7-8b10-bd817b279099-kube-api-access-ckvj5\") on node \"crc\" DevicePath \"\"" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.455587 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.455600 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.832209 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.832727 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" event={"ID":"425704dd-e289-42f7-8b10-bd817b279099","Type":"ContainerDied","Data":"603626e15d1c77272fcba0456289f5f80378cd91e749b2e5abd486589571d46a"} Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.832778 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="603626e15d1c77272fcba0456289f5f80378cd91e749b2e5abd486589571d46a" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.917505 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2"] Feb 02 11:07:20 crc kubenswrapper[4782]: E0202 11:07:20.917938 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="425704dd-e289-42f7-8b10-bd817b279099" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.917956 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="425704dd-e289-42f7-8b10-bd817b279099" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.918150 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="425704dd-e289-42f7-8b10-bd817b279099" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.918865 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.920782 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.921212 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.922290 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.923179 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.936278 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2"] Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.964454 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdjr5\" (UniqueName: \"kubernetes.io/projected/9b7bc661-fee9-41a6-a62e-0af1fc669e85-kube-api-access-cdjr5\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.964545 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.964666 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.066000 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdjr5\" (UniqueName: \"kubernetes.io/projected/9b7bc661-fee9-41a6-a62e-0af1fc669e85-kube-api-access-cdjr5\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.066336 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.066422 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.071277 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.072236 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.083983 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdjr5\" (UniqueName: \"kubernetes.io/projected/9b7bc661-fee9-41a6-a62e-0af1fc669e85-kube-api-access-cdjr5\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.238422 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: W0202 11:07:21.810155 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b7bc661_fee9_41a6_a62e_0af1fc669e85.slice/crio-f9e31a0009614d333d71e62fb83ac024e15efeda63ce3a0a13183461b0e8b18c WatchSource:0}: Error finding container f9e31a0009614d333d71e62fb83ac024e15efeda63ce3a0a13183461b0e8b18c: Status 404 returned error can't find the container with id f9e31a0009614d333d71e62fb83ac024e15efeda63ce3a0a13183461b0e8b18c Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.810320 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2"] Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.843625 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" event={"ID":"9b7bc661-fee9-41a6-a62e-0af1fc669e85","Type":"ContainerStarted","Data":"f9e31a0009614d333d71e62fb83ac024e15efeda63ce3a0a13183461b0e8b18c"} Feb 02 11:07:22 crc kubenswrapper[4782]: I0202 11:07:22.850627 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" event={"ID":"9b7bc661-fee9-41a6-a62e-0af1fc669e85","Type":"ContainerStarted","Data":"f5d5891e49acba50900ea7dad6534db383a08dbcaa564c467b692ffae7d6b80a"} Feb 02 11:07:26 crc kubenswrapper[4782]: I0202 11:07:26.821217 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:07:26 crc kubenswrapper[4782]: E0202 11:07:26.822031 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:07:26 crc kubenswrapper[4782]: I0202 11:07:26.884427 4782 generic.go:334] "Generic (PLEG): container finished" podID="9b7bc661-fee9-41a6-a62e-0af1fc669e85" containerID="f5d5891e49acba50900ea7dad6534db383a08dbcaa564c467b692ffae7d6b80a" exitCode=0 Feb 02 11:07:26 crc kubenswrapper[4782]: I0202 11:07:26.884509 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" event={"ID":"9b7bc661-fee9-41a6-a62e-0af1fc669e85","Type":"ContainerDied","Data":"f5d5891e49acba50900ea7dad6534db383a08dbcaa564c467b692ffae7d6b80a"} Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.377445 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.416390 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-inventory\") pod \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.416483 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdjr5\" (UniqueName: \"kubernetes.io/projected/9b7bc661-fee9-41a6-a62e-0af1fc669e85-kube-api-access-cdjr5\") pod \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.416582 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-ssh-key-openstack-edpm-ipam\") pod \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.451855 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7bc661-fee9-41a6-a62e-0af1fc669e85-kube-api-access-cdjr5" (OuterVolumeSpecName: "kube-api-access-cdjr5") pod "9b7bc661-fee9-41a6-a62e-0af1fc669e85" (UID: "9b7bc661-fee9-41a6-a62e-0af1fc669e85"). InnerVolumeSpecName "kube-api-access-cdjr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.491990 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b7bc661-fee9-41a6-a62e-0af1fc669e85" (UID: "9b7bc661-fee9-41a6-a62e-0af1fc669e85"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.517808 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-inventory" (OuterVolumeSpecName: "inventory") pod "9b7bc661-fee9-41a6-a62e-0af1fc669e85" (UID: "9b7bc661-fee9-41a6-a62e-0af1fc669e85"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.519235 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.519258 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdjr5\" (UniqueName: \"kubernetes.io/projected/9b7bc661-fee9-41a6-a62e-0af1fc669e85-kube-api-access-cdjr5\") on node \"crc\" DevicePath \"\"" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.519274 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.903702 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" event={"ID":"9b7bc661-fee9-41a6-a62e-0af1fc669e85","Type":"ContainerDied","Data":"f9e31a0009614d333d71e62fb83ac024e15efeda63ce3a0a13183461b0e8b18c"} Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.903749 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e31a0009614d333d71e62fb83ac024e15efeda63ce3a0a13183461b0e8b18c" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.903833 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.990087 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x"] Feb 02 11:07:29 crc kubenswrapper[4782]: E0202 11:07:28.990944 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7bc661-fee9-41a6-a62e-0af1fc669e85" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.990963 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7bc661-fee9-41a6-a62e-0af1fc669e85" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.991240 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7bc661-fee9-41a6-a62e-0af1fc669e85" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.992133 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.995222 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.999431 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.999768 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.999919 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.035887 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.035980 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.036058 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4hj4\" (UniqueName: \"kubernetes.io/projected/fd4eb6e0-afff-43a6-af04-0193fa711a9a-kube-api-access-j4hj4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.044614 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x"] Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.137237 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.137321 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.137393 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4hj4\" (UniqueName: 
\"kubernetes.io/projected/fd4eb6e0-afff-43a6-af04-0193fa711a9a-kube-api-access-j4hj4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.142336 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.142886 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.154734 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4hj4\" (UniqueName: \"kubernetes.io/projected/fd4eb6e0-afff-43a6-af04-0193fa711a9a-kube-api-access-j4hj4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.349073 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.881670 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x"] Feb 02 11:07:29 crc kubenswrapper[4782]: W0202 11:07:29.895810 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd4eb6e0_afff_43a6_af04_0193fa711a9a.slice/crio-388e841c194dbe71d449278b9129eaafcd23cad96bde46e007a82f77fbdd76bf WatchSource:0}: Error finding container 388e841c194dbe71d449278b9129eaafcd23cad96bde46e007a82f77fbdd76bf: Status 404 returned error can't find the container with id 388e841c194dbe71d449278b9129eaafcd23cad96bde46e007a82f77fbdd76bf Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.915676 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" event={"ID":"fd4eb6e0-afff-43a6-af04-0193fa711a9a","Type":"ContainerStarted","Data":"388e841c194dbe71d449278b9129eaafcd23cad96bde46e007a82f77fbdd76bf"} Feb 02 11:07:30 crc kubenswrapper[4782]: I0202 11:07:30.925808 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" event={"ID":"fd4eb6e0-afff-43a6-af04-0193fa711a9a","Type":"ContainerStarted","Data":"0e893f699ea19753909fb2dc54c9a946d6efd297f534f8a2fd10b438cd438ecd"} Feb 02 11:07:39 crc kubenswrapper[4782]: I0202 11:07:39.822172 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:07:39 crc kubenswrapper[4782]: E0202 11:07:39.822983 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:07:43 crc kubenswrapper[4782]: I0202 11:07:43.047268 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" podStartSLOduration=14.631260372 podStartE2EDuration="15.047231944s" podCreationTimestamp="2026-02-02 11:07:28 +0000 UTC" firstStartedPulling="2026-02-02 11:07:29.899276035 +0000 UTC m=+1729.783468751" lastFinishedPulling="2026-02-02 11:07:30.315247607 +0000 UTC m=+1730.199440323" observedRunningTime="2026-02-02 11:07:30.942789707 +0000 UTC m=+1730.826982433" watchObservedRunningTime="2026-02-02 11:07:43.047231944 +0000 UTC m=+1742.931424660" Feb 02 11:07:43 crc kubenswrapper[4782]: I0202 11:07:43.053049 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bwx58"] Feb 02 11:07:43 crc kubenswrapper[4782]: I0202 11:07:43.063690 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bwx58"] Feb 02 11:07:44 crc kubenswrapper[4782]: I0202 11:07:44.834861 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" path="/var/lib/kubelet/pods/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0/volumes" Feb 02 11:07:47 crc kubenswrapper[4782]: I0202 11:07:47.033849 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-ztmll"] Feb 02 11:07:47 crc kubenswrapper[4782]: I0202 11:07:47.043407 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-ztmll"] Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.266476 4782 scope.go:117] "RemoveContainer" containerID="86c67676caca480b43ace8b3b556dc1c7777a8a4b569eb0de34ba6545c1ccf6c" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.296768 4782 scope.go:117] "RemoveContainer" containerID="d918711ae10925784d0ab83a02dc8d40b553f98643dc5469d54bc38912d8020e" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.331381 4782 scope.go:117] "RemoveContainer" containerID="266185adfe7e4eb354941537aab95c70eb532acbac93a799d1b437d19b25b6c7" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.404553 4782 scope.go:117] "RemoveContainer" containerID="6d8d47213c18788507ca77e5f6162eb6c017b157cfec70f1dfb0ba7075187097" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.423883 4782 scope.go:117] "RemoveContainer" containerID="8e0d398b0286ba353cd173b897d449b2563fd8596bcbf1161ae3a708c88b87ef" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.465696 4782 scope.go:117] "RemoveContainer" containerID="9024b9d44f2a9c5bd7aaa4dc9abd2a12f77d4a6bdfae488ee552a49cc6449554" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.515984 4782 scope.go:117] "RemoveContainer" containerID="f0e0c9ec29176a7805dbb55ca0554bf08656a1bb5da6a9295d6c51196f8c9acf" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.535789 4782 scope.go:117] "RemoveContainer" containerID="91ff00aa29fb6af4c20c4ab6c7010da35390db314e4a9d0dc6101bd74c8cfe7c" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.556464 4782 scope.go:117] "RemoveContainer" containerID="a2c467e584e5732352c3aaba01db962a0b1958e32d2c79a6365d1b8fe2d96e2c" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.611556 4782 
scope.go:117] "RemoveContainer" containerID="469ae18dd42598dba552dffdd5607faf35c16e63cd9d3d0f900d45cc0954f86f" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.628358 4782 scope.go:117] "RemoveContainer" containerID="a7e9a4e8ac03aa75d7d2867e0b6e6e12cc8a9019e7c6d838c9869a17f5c4688b" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.645780 4782 scope.go:117] "RemoveContainer" containerID="489df32e7e5f6a2407566ee0433e9eb8f24a84a3bc401deba1b69bf5b52b02e2" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.664624 4782 scope.go:117] "RemoveContainer" containerID="642f3a52732b34bccf9c9fbc304bd2cfce8dc967c11a5c31acc742832089e402" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.837945 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8943d8a-337b-4852-9c11-55191a08a850" path="/var/lib/kubelet/pods/f8943d8a-337b-4852-9c11-55191a08a850/volumes" Feb 02 11:07:51 crc kubenswrapper[4782]: I0202 11:07:51.094630 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-9zhdd"] Feb 02 11:07:51 crc kubenswrapper[4782]: I0202 11:07:51.107235 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-9zhdd"] Feb 02 11:07:52 crc kubenswrapper[4782]: I0202 11:07:52.833674 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="173458b2-9a63-4456-9bc9-698d1414a679" path="/var/lib/kubelet/pods/173458b2-9a63-4456-9bc9-698d1414a679/volumes" Feb 02 11:07:54 crc kubenswrapper[4782]: I0202 11:07:54.822370 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:07:54 crc kubenswrapper[4782]: E0202 11:07:54.822924 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:07:55 crc kubenswrapper[4782]: I0202 11:07:55.039546 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-t58qc"] Feb 02 11:07:55 crc kubenswrapper[4782]: I0202 11:07:55.051397 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-t58qc"] Feb 02 11:07:56 crc kubenswrapper[4782]: I0202 11:07:56.834199 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45d6513-2de0-4ece-bbbc-26c6780cd145" path="/var/lib/kubelet/pods/f45d6513-2de0-4ece-bbbc-26c6780cd145/volumes" Feb 02 11:08:07 crc kubenswrapper[4782]: I0202 11:08:07.034441 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qjtml"] Feb 02 11:08:07 crc kubenswrapper[4782]: I0202 11:08:07.045898 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qjtml"] Feb 02 11:08:07 crc kubenswrapper[4782]: I0202 11:08:07.821840 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:08:07 crc kubenswrapper[4782]: E0202 11:08:07.822169 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:08:08 crc kubenswrapper[4782]: I0202 11:08:08.834525 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e3fab7-be93-409c-a88e-85c8d0ca533c" path="/var/lib/kubelet/pods/14e3fab7-be93-409c-a88e-85c8d0ca533c/volumes" Feb 02 11:08:12 crc kubenswrapper[4782]: I0202 11:08:12.047095 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rvrqj"] Feb 02 11:08:12 crc kubenswrapper[4782]: I0202 11:08:12.061030 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rvrqj"] Feb 02 11:08:12 crc kubenswrapper[4782]: I0202 11:08:12.833277 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" path="/var/lib/kubelet/pods/bf4fe919-15fe-4478-be0f-8e3bf00147b4/volumes" Feb 02 11:08:18 crc kubenswrapper[4782]: I0202 11:08:18.362623 4782 generic.go:334] "Generic (PLEG): container finished" podID="fd4eb6e0-afff-43a6-af04-0193fa711a9a" containerID="0e893f699ea19753909fb2dc54c9a946d6efd297f534f8a2fd10b438cd438ecd" exitCode=0 Feb 02 11:08:18 crc kubenswrapper[4782]: I0202 11:08:18.363265 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" event={"ID":"fd4eb6e0-afff-43a6-af04-0193fa711a9a","Type":"ContainerDied","Data":"0e893f699ea19753909fb2dc54c9a946d6efd297f534f8a2fd10b438cd438ecd"} Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.816218 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.901818 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4hj4\" (UniqueName: \"kubernetes.io/projected/fd4eb6e0-afff-43a6-af04-0193fa711a9a-kube-api-access-j4hj4\") pod \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.902050 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-ssh-key-openstack-edpm-ipam\") pod \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.902104 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-inventory\") pod \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.913981 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4eb6e0-afff-43a6-af04-0193fa711a9a-kube-api-access-j4hj4" (OuterVolumeSpecName: "kube-api-access-j4hj4") pod "fd4eb6e0-afff-43a6-af04-0193fa711a9a" (UID: "fd4eb6e0-afff-43a6-af04-0193fa711a9a"). InnerVolumeSpecName "kube-api-access-j4hj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.929878 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fd4eb6e0-afff-43a6-af04-0193fa711a9a" (UID: "fd4eb6e0-afff-43a6-af04-0193fa711a9a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.935633 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-inventory" (OuterVolumeSpecName: "inventory") pod "fd4eb6e0-afff-43a6-af04-0193fa711a9a" (UID: "fd4eb6e0-afff-43a6-af04-0193fa711a9a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.009081 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.009138 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.009153 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4hj4\" (UniqueName: \"kubernetes.io/projected/fd4eb6e0-afff-43a6-af04-0193fa711a9a-kube-api-access-j4hj4\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.382038 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" event={"ID":"fd4eb6e0-afff-43a6-af04-0193fa711a9a","Type":"ContainerDied","Data":"388e841c194dbe71d449278b9129eaafcd23cad96bde46e007a82f77fbdd76bf"} Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.382080 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388e841c194dbe71d449278b9129eaafcd23cad96bde46e007a82f77fbdd76bf" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.382095 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.475967 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fdzds"] Feb 02 11:08:20 crc kubenswrapper[4782]: E0202 11:08:20.476324 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4eb6e0-afff-43a6-af04-0193fa711a9a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.476338 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4eb6e0-afff-43a6-af04-0193fa711a9a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.476479 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4eb6e0-afff-43a6-af04-0193fa711a9a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.477052 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.480029 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.481914 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.482162 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.482282 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.495925 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fdzds"] Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.621781 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.622480 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.622744 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2wvs\" (UniqueName: \"kubernetes.io/projected/96961a4d-2144-4ca9-852f-f624c591bf50-kube-api-access-k2wvs\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.725127 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2wvs\" (UniqueName: \"kubernetes.io/projected/96961a4d-2144-4ca9-852f-f624c591bf50-kube-api-access-k2wvs\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.725311 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.726508 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc 
kubenswrapper[4782]: I0202 11:08:20.735607 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.735694 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.749416 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2wvs\" (UniqueName: \"kubernetes.io/projected/96961a4d-2144-4ca9-852f-f624c591bf50-kube-api-access-k2wvs\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.803177 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:21 crc kubenswrapper[4782]: I0202 11:08:21.362624 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fdzds"] Feb 02 11:08:21 crc kubenswrapper[4782]: I0202 11:08:21.395881 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" event={"ID":"96961a4d-2144-4ca9-852f-f624c591bf50","Type":"ContainerStarted","Data":"dda7b8319db5e2d132b70457bf315d27e55157a4d078744fcda722fc4a503506"} Feb 02 11:08:21 crc kubenswrapper[4782]: I0202 11:08:21.822251 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:08:21 crc kubenswrapper[4782]: E0202 11:08:21.822605 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:08:22 crc kubenswrapper[4782]: I0202 11:08:22.412840 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" event={"ID":"96961a4d-2144-4ca9-852f-f624c591bf50","Type":"ContainerStarted","Data":"a3652484620aa178a685846c99f7de4b05cf1ea0f50bee5c8828a07d27f3b419"} Feb 02 11:08:22 crc kubenswrapper[4782]: I0202 11:08:22.437933 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" podStartSLOduration=2.027112531 podStartE2EDuration="2.437910265s" podCreationTimestamp="2026-02-02 11:08:20 +0000 UTC" firstStartedPulling="2026-02-02 11:08:21.376930119 +0000 UTC m=+1781.261122835" lastFinishedPulling="2026-02-02 11:08:21.787727853 +0000 UTC m=+1781.671920569" observedRunningTime="2026-02-02 11:08:22.426101675 +0000 UTC m=+1782.310294391" watchObservedRunningTime="2026-02-02 11:08:22.437910265 +0000 UTC m=+1782.322102981" Feb 02 11:08:29 crc 
kubenswrapper[4782]: I0202 11:08:29.474811 4782 generic.go:334] "Generic (PLEG): container finished" podID="96961a4d-2144-4ca9-852f-f624c591bf50" containerID="a3652484620aa178a685846c99f7de4b05cf1ea0f50bee5c8828a07d27f3b419" exitCode=0 Feb 02 11:08:29 crc kubenswrapper[4782]: I0202 11:08:29.474971 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" event={"ID":"96961a4d-2144-4ca9-852f-f624c591bf50","Type":"ContainerDied","Data":"a3652484620aa178a685846c99f7de4b05cf1ea0f50bee5c8828a07d27f3b419"} Feb 02 11:08:30 crc kubenswrapper[4782]: I0202 11:08:30.958747 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.020136 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-inventory-0\") pod \"96961a4d-2144-4ca9-852f-f624c591bf50\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.020244 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-ssh-key-openstack-edpm-ipam\") pod \"96961a4d-2144-4ca9-852f-f624c591bf50\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.020295 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2wvs\" (UniqueName: \"kubernetes.io/projected/96961a4d-2144-4ca9-852f-f624c591bf50-kube-api-access-k2wvs\") pod \"96961a4d-2144-4ca9-852f-f624c591bf50\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.030841 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96961a4d-2144-4ca9-852f-f624c591bf50-kube-api-access-k2wvs" (OuterVolumeSpecName: "kube-api-access-k2wvs") pod "96961a4d-2144-4ca9-852f-f624c591bf50" (UID: "96961a4d-2144-4ca9-852f-f624c591bf50"). InnerVolumeSpecName "kube-api-access-k2wvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.054560 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "96961a4d-2144-4ca9-852f-f624c591bf50" (UID: "96961a4d-2144-4ca9-852f-f624c591bf50"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.094088 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96961a4d-2144-4ca9-852f-f624c591bf50" (UID: "96961a4d-2144-4ca9-852f-f624c591bf50"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.122862 4782 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.122907 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.122929 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2wvs\" (UniqueName: \"kubernetes.io/projected/96961a4d-2144-4ca9-852f-f624c591bf50-kube-api-access-k2wvs\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.492488 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" event={"ID":"96961a4d-2144-4ca9-852f-f624c591bf50","Type":"ContainerDied","Data":"dda7b8319db5e2d132b70457bf315d27e55157a4d078744fcda722fc4a503506"} Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.492529 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dda7b8319db5e2d132b70457bf315d27e55157a4d078744fcda722fc4a503506" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.492597 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.576501 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj"] Feb 02 11:08:31 crc kubenswrapper[4782]: E0202 11:08:31.576998 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96961a4d-2144-4ca9-852f-f624c591bf50" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.577024 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="96961a4d-2144-4ca9-852f-f624c591bf50" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.577240 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="96961a4d-2144-4ca9-852f-f624c591bf50" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.578034 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.580981 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.584040 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.584255 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.585817 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj"] Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.604260 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.631705 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.631776 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t98jz\" (UniqueName: \"kubernetes.io/projected/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-kube-api-access-t98jz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.632130 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.733453 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.733516 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t98jz\" (UniqueName: \"kubernetes.io/projected/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-kube-api-access-t98jz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.733684 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.738109 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.739257 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.754136 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t98jz\" (UniqueName: \"kubernetes.io/projected/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-kube-api-access-t98jz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.908247 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:32 crc kubenswrapper[4782]: I0202 11:08:32.472168 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj"] Feb 02 11:08:32 crc kubenswrapper[4782]: I0202 11:08:32.504022 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" event={"ID":"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63","Type":"ContainerStarted","Data":"6f5e81442e7a3835f18b2d3c1891db6eb302fa70bd8b3812f6c59e9ba5490a3c"} Feb 02 11:08:33 crc kubenswrapper[4782]: I0202 11:08:33.530145 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" event={"ID":"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63","Type":"ContainerStarted","Data":"602d6c4af1ac8198091f7c68ae19fa3196c9ffcd0620c4b0def39e668a4ec792"} Feb 02 11:08:33 crc kubenswrapper[4782]: I0202 11:08:33.821509 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:08:33 crc kubenswrapper[4782]: E0202 11:08:33.821770 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:08:41 crc kubenswrapper[4782]: I0202 11:08:41.597062 4782 generic.go:334] "Generic (PLEG): container finished" podID="cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" containerID="602d6c4af1ac8198091f7c68ae19fa3196c9ffcd0620c4b0def39e668a4ec792" exitCode=0 Feb 02 11:08:41 crc kubenswrapper[4782]: I0202 11:08:41.597113 4782 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" event={"ID":"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63","Type":"ContainerDied","Data":"602d6c4af1ac8198091f7c68ae19fa3196c9ffcd0620c4b0def39e668a4ec792"} Feb 02 11:08:42 crc kubenswrapper[4782]: I0202 11:08:42.991736 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.030171 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-inventory\") pod \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.030404 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-ssh-key-openstack-edpm-ipam\") pod \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.030470 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t98jz\" (UniqueName: \"kubernetes.io/projected/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-kube-api-access-t98jz\") pod \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.038206 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-kube-api-access-t98jz" (OuterVolumeSpecName: "kube-api-access-t98jz") pod "cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" (UID: "cdbaecb3-a52c-45c2-aa69-a9eac6ffea63"). InnerVolumeSpecName "kube-api-access-t98jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.056793 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-inventory" (OuterVolumeSpecName: "inventory") pod "cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" (UID: "cdbaecb3-a52c-45c2-aa69-a9eac6ffea63"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.056835 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" (UID: "cdbaecb3-a52c-45c2-aa69-a9eac6ffea63"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.132767 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.132832 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t98jz\" (UniqueName: \"kubernetes.io/projected/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-kube-api-access-t98jz\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.132845 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.614335 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" event={"ID":"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63","Type":"ContainerDied","Data":"6f5e81442e7a3835f18b2d3c1891db6eb302fa70bd8b3812f6c59e9ba5490a3c"} Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.614383 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f5e81442e7a3835f18b2d3c1891db6eb302fa70bd8b3812f6c59e9ba5490a3c" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.614394 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.708127 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx"] Feb 02 11:08:43 crc kubenswrapper[4782]: E0202 11:08:43.708509 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.708535 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.708779 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.716061 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.717764 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.717850 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx"] Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.718577 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.718813 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.719284 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.845461 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djdgq\" (UniqueName: \"kubernetes.io/projected/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-kube-api-access-djdgq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.845529 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.845611 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.947259 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.947455 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djdgq\" (UniqueName: \"kubernetes.io/projected/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-kube-api-access-djdgq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.947514 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.958321 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.965117 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.982254 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djdgq\" (UniqueName: \"kubernetes.io/projected/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-kube-api-access-djdgq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:44 crc kubenswrapper[4782]: I0202 11:08:44.040455 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:44 crc kubenswrapper[4782]: I0202 11:08:44.567966 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx"] Feb 02 11:08:44 crc kubenswrapper[4782]: I0202 11:08:44.624943 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" event={"ID":"ce2c78bc-99b3-4deb-871f-923a3a42d5ff","Type":"ContainerStarted","Data":"89f6caa577019f7d8bf7fd27767a38e45ca905d949b88e53e2d3fdab397bb35f"} Feb 02 11:08:45 crc kubenswrapper[4782]: I0202 11:08:45.637634 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" event={"ID":"ce2c78bc-99b3-4deb-871f-923a3a42d5ff","Type":"ContainerStarted","Data":"1d498ad8d1bfb2287778416cd5cee7768c1031fd2150b0b80bdee8cb15dfaffd"} Feb 02 11:08:45 crc kubenswrapper[4782]: I0202 11:08:45.667264 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" podStartSLOduration=2.201148819 podStartE2EDuration="2.667247342s" podCreationTimestamp="2026-02-02 11:08:43 +0000 UTC" firstStartedPulling="2026-02-02 11:08:44.568166782 +0000 UTC m=+1804.452359518" lastFinishedPulling="2026-02-02 11:08:45.034265315 +0000 UTC m=+1804.918458041" observedRunningTime="2026-02-02 11:08:45.658835461 +0000 UTC m=+1805.543028197" watchObservedRunningTime="2026-02-02 11:08:45.667247342 +0000 UTC m=+1805.551440048" Feb 02 11:08:46 crc kubenswrapper[4782]: I0202 11:08:46.821337 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:08:46 crc kubenswrapper[4782]: E0202 11:08:46.821592 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:08:48 crc kubenswrapper[4782]: I0202 11:08:48.916038 4782 scope.go:117] "RemoveContainer" containerID="882286d92ef94b177095925f1761989436448214282f382f07a04e273ec62549" Feb 02 11:08:48 crc kubenswrapper[4782]: I0202 11:08:48.952768 4782 scope.go:117] "RemoveContainer" containerID="f96dc9d1eca03acac5731eacf624fbd7091513cfed0cc461bda4976a5d7b4254" Feb 02 11:08:49 crc kubenswrapper[4782]: I0202 11:08:49.002759 4782 scope.go:117] "RemoveContainer" containerID="86ae63a42dd213a82d90c920d379402488562da05112fd3a36da50fdfc632f7d" Feb 02 11:08:49 crc kubenswrapper[4782]: I0202 11:08:49.033167 4782 scope.go:117] "RemoveContainer" containerID="bb8bee75583f03091be99a3eb7b070a749409afcb16ccfe4ae7f61a996ce78c5" Feb 02 11:08:49 crc kubenswrapper[4782]: I0202 11:08:49.089742 4782 scope.go:117] "RemoveContainer" containerID="e47203a1a44b3b88fecb28ffdf42000d4b85a4d8f915c7dc05cd21438f5304c4" Feb 02 11:08:54 crc kubenswrapper[4782]: I0202 11:08:54.714476 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce2c78bc-99b3-4deb-871f-923a3a42d5ff" containerID="1d498ad8d1bfb2287778416cd5cee7768c1031fd2150b0b80bdee8cb15dfaffd" exitCode=0 Feb 02 11:08:54 crc kubenswrapper[4782]: I0202 11:08:54.714670 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" event={"ID":"ce2c78bc-99b3-4deb-871f-923a3a42d5ff","Type":"ContainerDied","Data":"1d498ad8d1bfb2287778416cd5cee7768c1031fd2150b0b80bdee8cb15dfaffd"} Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.063679 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-j8z8n"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.092812 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-964hl"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.098844 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.110921 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9147-account-create-update-qcs9t"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.122670 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3e7e-account-create-update-n4kct"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.130082 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3e7e-account-create-update-n4kct"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.143443 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9147-account-create-update-qcs9t"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.150984 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-964hl"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.160776 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-j8z8n"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.162185 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djdgq\" (UniqueName: \"kubernetes.io/projected/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-kube-api-access-djdgq\") pod \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.162331 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-inventory\") pod \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.162547 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-ssh-key-openstack-edpm-ipam\") pod \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.174230 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-kube-api-access-djdgq" (OuterVolumeSpecName: "kube-api-access-djdgq") pod "ce2c78bc-99b3-4deb-871f-923a3a42d5ff" (UID: "ce2c78bc-99b3-4deb-871f-923a3a42d5ff"). InnerVolumeSpecName "kube-api-access-djdgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.204379 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ce2c78bc-99b3-4deb-871f-923a3a42d5ff" (UID: "ce2c78bc-99b3-4deb-871f-923a3a42d5ff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.218481 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-inventory" (OuterVolumeSpecName: "inventory") pod "ce2c78bc-99b3-4deb-871f-923a3a42d5ff" (UID: "ce2c78bc-99b3-4deb-871f-923a3a42d5ff"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.265177 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djdgq\" (UniqueName: \"kubernetes.io/projected/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-kube-api-access-djdgq\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.265208 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.265220 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.730998 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" event={"ID":"ce2c78bc-99b3-4deb-871f-923a3a42d5ff","Type":"ContainerDied","Data":"89f6caa577019f7d8bf7fd27767a38e45ca905d949b88e53e2d3fdab397bb35f"} Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.731042 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f6caa577019f7d8bf7fd27767a38e45ca905d949b88e53e2d3fdab397bb35f" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.731108 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.831444 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07bbffca-46a4-4693-ae3f-011a5ee0e317" path="/var/lib/kubelet/pods/07bbffca-46a4-4693-ae3f-011a5ee0e317/volumes" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.832395 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0abc6f3c-1f7d-4f48-8beb-205307984cdc" path="/var/lib/kubelet/pods/0abc6f3c-1f7d-4f48-8beb-205307984cdc/volumes" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.833121 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a9a0fe2-4862-47e1-91d0-553d95235f39" path="/var/lib/kubelet/pods/6a9a0fe2-4862-47e1-91d0-553d95235f39/volumes" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.833802 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b55df6c-8971-415a-a934-0ec48a149b81" path="/var/lib/kubelet/pods/8b55df6c-8971-415a-a934-0ec48a149b81/volumes" Feb 02 11:08:57 crc kubenswrapper[4782]: I0202 11:08:57.039993 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-627f-account-create-update-h6hdk"] Feb 02 11:08:57 crc kubenswrapper[4782]: I0202 11:08:57.051048 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jnw6j"] Feb 02 11:08:57 crc kubenswrapper[4782]: I0202 11:08:57.061086 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-627f-account-create-update-h6hdk"] Feb 02 11:08:57 crc kubenswrapper[4782]: I0202 11:08:57.068782 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jnw6j"] Feb 02 11:08:58 crc kubenswrapper[4782]: I0202 11:08:58.831954 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b75d8c-9435-483f-8e95-97690314cfb5" 
path="/var/lib/kubelet/pods/a9b75d8c-9435-483f-8e95-97690314cfb5/volumes" Feb 02 11:08:58 crc kubenswrapper[4782]: I0202 11:08:58.832516 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5eccd3e-f895-4c2f-a1e5-c337a89d2439" path="/var/lib/kubelet/pods/c5eccd3e-f895-4c2f-a1e5-c337a89d2439/volumes" Feb 02 11:09:01 crc kubenswrapper[4782]: I0202 11:09:01.821149 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:09:01 crc kubenswrapper[4782]: E0202 11:09:01.821992 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:09:16 crc kubenswrapper[4782]: I0202 11:09:16.828621 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:09:16 crc kubenswrapper[4782]: E0202 11:09:16.829505 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:09:25 crc kubenswrapper[4782]: I0202 11:09:25.069854 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lcdcm"] Feb 02 11:09:25 crc kubenswrapper[4782]: I0202 11:09:25.080205 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lcdcm"] Feb 02 11:09:26 crc kubenswrapper[4782]: I0202 11:09:26.833954 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0b52751-0177-4fa7-8d87-fca1cab9a096" path="/var/lib/kubelet/pods/f0b52751-0177-4fa7-8d87-fca1cab9a096/volumes" Feb 02 11:09:30 crc kubenswrapper[4782]: I0202 11:09:30.826306 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:09:30 crc kubenswrapper[4782]: E0202 11:09:30.826760 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:09:44 crc kubenswrapper[4782]: I0202 11:09:44.821949 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:09:44 crc kubenswrapper[4782]: E0202 11:09:44.822786 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.242610 4782 scope.go:117] "RemoveContainer" containerID="59303b0d2ccdb82f829b283e200498f7a3b29c09b53180da767f3025ed87821c" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.275563 4782 scope.go:117] "RemoveContainer" containerID="938ebd6bbe46ccc6431b3d92e3b6f8803ade372fd58b9fe07b9f065675fc25c4" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.307441 4782 scope.go:117] "RemoveContainer" containerID="ec0e250135ad643a0376384574da7a7800b3dd64125badf008a16dd100e20d1b" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.344043 4782 scope.go:117] "RemoveContainer" containerID="0e49e7a8577a45cc63b62f6e59ab36faf5118b4383cec84de5d8b281d39fd041" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.385936 4782 scope.go:117] "RemoveContainer" containerID="23b00d976eb1b671e56fad532c67af4f3f0fb48695bf8d78a64de1654d16975f" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.429780 4782 scope.go:117] "RemoveContainer" containerID="7930f426b6752d1ae7cd1f189cd59a47f3d0e3b099f35ef79f6b78e86ed5ab0d" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.475382 4782 scope.go:117] "RemoveContainer" containerID="c9a22a15fdf9c10f8fa6ebae4f0ac6052d277f6b5e54ac311112c99327d4ce45" Feb 02 11:09:50 crc kubenswrapper[4782]: I0202 11:09:50.032869 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-5wtv6"] Feb 02 11:09:50 crc kubenswrapper[4782]: I0202 11:09:50.041315 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-5wtv6"] Feb 02 11:09:50 crc kubenswrapper[4782]: I0202 11:09:50.832442 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa0ea9b-5d59-4094-a259-2f841d40db2c" path="/var/lib/kubelet/pods/baa0ea9b-5d59-4094-a259-2f841d40db2c/volumes" Feb 02 11:09:51 crc kubenswrapper[4782]: I0202 11:09:51.051086 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fb5lz"] Feb 02 11:09:51 crc kubenswrapper[4782]: I0202 11:09:51.059496 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fb5lz"] Feb 02 11:09:52 crc kubenswrapper[4782]: I0202 11:09:52.832300 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d87918f-7c3d-4932-a4bd-18a2cf9fc199" path="/var/lib/kubelet/pods/5d87918f-7c3d-4932-a4bd-18a2cf9fc199/volumes" Feb 02 11:09:59 crc kubenswrapper[4782]: I0202 11:09:59.821449 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:09:59 crc kubenswrapper[4782]: E0202 11:09:59.822085 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:10:11 crc kubenswrapper[4782]: I0202 11:10:11.821266 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:10:11 crc kubenswrapper[4782]: E0202 11:10:11.822121 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:10:26 crc kubenswrapper[4782]: I0202 11:10:26.821629 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:10:26 crc kubenswrapper[4782]: E0202 11:10:26.822939 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:10:33 crc kubenswrapper[4782]: I0202 11:10:33.042497 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lxwch"] Feb 02 11:10:33 crc kubenswrapper[4782]: I0202 11:10:33.049877 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lxwch"] Feb 02 11:10:34 crc kubenswrapper[4782]: I0202 11:10:34.832355 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d921bd77-679d-4722-8238-a75dc4f3b6b5" path="/var/lib/kubelet/pods/d921bd77-679d-4722-8238-a75dc4f3b6b5/volumes" Feb 02 11:10:41 crc kubenswrapper[4782]: I0202 11:10:41.820711 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:10:41 crc kubenswrapper[4782]: E0202 11:10:41.821502 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:10:49 crc kubenswrapper[4782]: I0202 11:10:49.657346 4782 scope.go:117] "RemoveContainer" containerID="730902e09b299cdd00a01ece9539dce44aec0c2aaecd122d8a4c41d8be4117fb" Feb 02 11:10:49 crc kubenswrapper[4782]: I0202 11:10:49.715870 4782 scope.go:117] "RemoveContainer" containerID="31af4ce695c2a4475fc8775213cd57460451e43f7c30b86186b9592d2359448f" Feb 02 11:10:49 crc kubenswrapper[4782]: I0202 11:10:49.762484 4782 scope.go:117] "RemoveContainer" containerID="8185fbc7b3d30cf6bb76bc01518fb63e05726e26ac97fb50e13e8ad1440798ce" Feb 02 11:10:56 crc kubenswrapper[4782]: I0202 11:10:56.822028 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:10:56 crc kubenswrapper[4782]: E0202 11:10:56.822787 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:11:07 crc kubenswrapper[4782]: I0202 11:11:07.821835 4782 scope.go:117] 
"RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:11:07 crc kubenswrapper[4782]: E0202 11:11:07.823819 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:11:18 crc kubenswrapper[4782]: I0202 11:11:18.821247 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:11:18 crc kubenswrapper[4782]: E0202 11:11:18.822005 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:11:30 crc kubenswrapper[4782]: I0202 11:11:30.825803 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:11:30 crc kubenswrapper[4782]: E0202 11:11:30.826542 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:11:41 crc kubenswrapper[4782]: I0202 11:11:41.821077 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:11:41 crc kubenswrapper[4782]: E0202 11:11:41.822024 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:11:56 crc kubenswrapper[4782]: I0202 11:11:56.825233 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:11:57 crc kubenswrapper[4782]: I0202 11:11:57.182384 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"f12380752e6de4f8dedc92e062f8cb6f3d5a16260278e7b8b47bff7dc97ca296"} Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.769735 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bbdvz"] Feb 02 11:13:50 crc kubenswrapper[4782]: E0202 11:13:50.770711 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2c78bc-99b3-4deb-871f-923a3a42d5ff" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:13:50 crc 
kubenswrapper[4782]: I0202 11:13:50.770729 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2c78bc-99b3-4deb-871f-923a3a42d5ff" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.770925 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce2c78bc-99b3-4deb-871f-923a3a42d5ff" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.779588 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.792866 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bbdvz"] Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.882850 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-catalog-content\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.882899 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-utilities\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.883057 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qd8w\" (UniqueName: \"kubernetes.io/projected/e98159ff-6334-40e3-8649-a4b880e9dcca-kube-api-access-7qd8w\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.984391 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qd8w\" (UniqueName: \"kubernetes.io/projected/e98159ff-6334-40e3-8649-a4b880e9dcca-kube-api-access-7qd8w\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.984508 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-catalog-content\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.984527 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-utilities\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.985065 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-utilities\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " 
pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.985101 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-catalog-content\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:51 crc kubenswrapper[4782]: I0202 11:13:51.007033 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qd8w\" (UniqueName: \"kubernetes.io/projected/e98159ff-6334-40e3-8649-a4b880e9dcca-kube-api-access-7qd8w\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:51 crc kubenswrapper[4782]: I0202 11:13:51.101885 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:51 crc kubenswrapper[4782]: I0202 11:13:51.617608 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bbdvz"] Feb 02 11:13:52 crc kubenswrapper[4782]: I0202 11:13:52.094141 4782 generic.go:334] "Generic (PLEG): container finished" podID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerID="48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69" exitCode=0 Feb 02 11:13:52 crc kubenswrapper[4782]: I0202 11:13:52.094190 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerDied","Data":"48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69"} Feb 02 11:13:52 crc kubenswrapper[4782]: I0202 11:13:52.094219 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerStarted","Data":"5a67a52e538a73e40ce3e5ef726731d2f0ea703d21e46a76a2b3a75ad88b7041"} Feb 02 11:13:52 crc kubenswrapper[4782]: I0202 11:13:52.096453 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:13:54 crc kubenswrapper[4782]: I0202 11:13:54.112528 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerStarted","Data":"ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed"} Feb 02 11:13:59 crc kubenswrapper[4782]: E0202 11:13:59.611125 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode98159ff_6334_40e3_8649_a4b880e9dcca.slice/crio-conmon-ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed.scope\": RecentStats: unable to find data in memory cache]" Feb 02 11:14:00 crc kubenswrapper[4782]: I0202 11:14:00.166916 4782 generic.go:334] "Generic (PLEG): container finished" podID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerID="ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed" exitCode=0 Feb 02 11:14:00 crc kubenswrapper[4782]: I0202 11:14:00.166958 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" 
event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerDied","Data":"ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed"} Feb 02 11:14:01 crc kubenswrapper[4782]: I0202 11:14:01.179358 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerStarted","Data":"98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd"} Feb 02 11:14:01 crc kubenswrapper[4782]: I0202 11:14:01.197953 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bbdvz" podStartSLOduration=2.66942593 podStartE2EDuration="11.197934565s" podCreationTimestamp="2026-02-02 11:13:50 +0000 UTC" firstStartedPulling="2026-02-02 11:13:52.096191069 +0000 UTC m=+2111.980383785" lastFinishedPulling="2026-02-02 11:14:00.624699704 +0000 UTC m=+2120.508892420" observedRunningTime="2026-02-02 11:14:01.196404031 +0000 UTC m=+2121.080596747" watchObservedRunningTime="2026-02-02 11:14:01.197934565 +0000 UTC m=+2121.082127281" Feb 02 11:14:11 crc kubenswrapper[4782]: I0202 11:14:11.102847 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:14:11 crc kubenswrapper[4782]: I0202 11:14:11.103461 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:14:11 crc kubenswrapper[4782]: I0202 11:14:11.158384 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:14:11 crc kubenswrapper[4782]: I0202 11:14:11.298171 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:14:11 crc kubenswrapper[4782]: I0202 11:14:11.403282 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bbdvz"] Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.267687 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bbdvz" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="registry-server" containerID="cri-o://98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd" gracePeriod=2 Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.701181 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.809498 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-catalog-content\") pod \"e98159ff-6334-40e3-8649-a4b880e9dcca\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.809578 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qd8w\" (UniqueName: \"kubernetes.io/projected/e98159ff-6334-40e3-8649-a4b880e9dcca-kube-api-access-7qd8w\") pod \"e98159ff-6334-40e3-8649-a4b880e9dcca\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.809890 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-utilities\") pod \"e98159ff-6334-40e3-8649-a4b880e9dcca\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.810712 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-utilities" (OuterVolumeSpecName: "utilities") pod "e98159ff-6334-40e3-8649-a4b880e9dcca" (UID: "e98159ff-6334-40e3-8649-a4b880e9dcca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.810983 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.826381 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e98159ff-6334-40e3-8649-a4b880e9dcca-kube-api-access-7qd8w" (OuterVolumeSpecName: "kube-api-access-7qd8w") pod "e98159ff-6334-40e3-8649-a4b880e9dcca" (UID: "e98159ff-6334-40e3-8649-a4b880e9dcca"). InnerVolumeSpecName "kube-api-access-7qd8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.912720 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qd8w\" (UniqueName: \"kubernetes.io/projected/e98159ff-6334-40e3-8649-a4b880e9dcca-kube-api-access-7qd8w\") on node \"crc\" DevicePath \"\"" Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.950032 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e98159ff-6334-40e3-8649-a4b880e9dcca" (UID: "e98159ff-6334-40e3-8649-a4b880e9dcca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.014605 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.276944 4782 generic.go:334] "Generic (PLEG): container finished" podID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerID="98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd" exitCode=0 Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.276991 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerDied","Data":"98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd"} Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.277058 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerDied","Data":"5a67a52e538a73e40ce3e5ef726731d2f0ea703d21e46a76a2b3a75ad88b7041"} Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.277080 4782 scope.go:117] "RemoveContainer" containerID="98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.277744 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.294978 4782 scope.go:117] "RemoveContainer" containerID="ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.308914 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bbdvz"] Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.321385 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bbdvz"] Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.330327 4782 scope.go:117] "RemoveContainer" containerID="48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.365597 4782 scope.go:117] "RemoveContainer" containerID="98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd" Feb 02 11:14:14 crc kubenswrapper[4782]: E0202 11:14:14.366540 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd\": container with ID starting with 98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd not found: ID does not exist" containerID="98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.366615 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd"} err="failed to get container status \"98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd\": rpc error: code = NotFound desc = could not find container \"98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd\": container with ID starting with 98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd not found: ID does not exist" Feb 02 11:14:14 crc 
kubenswrapper[4782]: I0202 11:14:14.366664 4782 scope.go:117] "RemoveContainer" containerID="ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed"
Feb 02 11:14:14 crc kubenswrapper[4782]: E0202 11:14:14.367166 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed\": container with ID starting with ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed not found: ID does not exist" containerID="ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed"
Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.367191 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed"} err="failed to get container status \"ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed\": rpc error: code = NotFound desc = could not find container \"ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed\": container with ID starting with ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed not found: ID does not exist"
Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.367205 4782 scope.go:117] "RemoveContainer" containerID="48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69"
Feb 02 11:14:14 crc kubenswrapper[4782]: E0202 11:14:14.367979 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69\": container with ID starting with 48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69 not found: ID does not exist" containerID="48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69"
Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.368003 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69"} err="failed to get container status \"48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69\": rpc error: code = NotFound desc = could not find container \"48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69\": container with ID starting with 48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69 not found: ID does not exist"
Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.833477 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" path="/var/lib/kubelet/pods/e98159ff-6334-40e3-8649-a4b880e9dcca/volumes"
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.087827 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.101485 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.112497 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.124402 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.132393 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.140113 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.147898 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.155705 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.163257 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.170873 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.178860 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fdzds"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.185686 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.192490 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.197550 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fdzds"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.203741 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.211683 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.219429 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.228526 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.236655 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.243817 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj"]
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.835443 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37960174-d26b-460f-abd9-934dee1ecc8c" path="/var/lib/kubelet/pods/37960174-d26b-460f-abd9-934dee1ecc8c/volumes"
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.837288 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="425704dd-e289-42f7-8b10-bd817b279099" path="/var/lib/kubelet/pods/425704dd-e289-42f7-8b10-bd817b279099/volumes"
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.838045 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a24fab5-51cc-4f0a-a823-c9748efd8410" path="/var/lib/kubelet/pods/5a24fab5-51cc-4f0a-a823-c9748efd8410/volumes"
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.838749 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" path="/var/lib/kubelet/pods/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9/volumes"
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.844612 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96961a4d-2144-4ca9-852f-f624c591bf50" path="/var/lib/kubelet/pods/96961a4d-2144-4ca9-852f-f624c591bf50/volumes"
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.845588 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99553aeb-f0fe-47e8-9d2a-64f4b49be76c" path="/var/lib/kubelet/pods/99553aeb-f0fe-47e8-9d2a-64f4b49be76c/volumes"
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.846433 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b7bc661-fee9-41a6-a62e-0af1fc669e85" path="/var/lib/kubelet/pods/9b7bc661-fee9-41a6-a62e-0af1fc669e85/volumes"
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.848011 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" path="/var/lib/kubelet/pods/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63/volumes"
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.848690 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce2c78bc-99b3-4deb-871f-923a3a42d5ff" path="/var/lib/kubelet/pods/ce2c78bc-99b3-4deb-871f-923a3a42d5ff/volumes"
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.849409 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4eb6e0-afff-43a6-af04-0193fa711a9a" path="/var/lib/kubelet/pods/fd4eb6e0-afff-43a6-af04-0193fa711a9a/volumes"
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.952783 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.953300 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.513158 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"]
Feb 02 11:14:35 crc kubenswrapper[4782]: E0202 11:14:35.513990 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="extract-utilities"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.514002 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="extract-utilities"
Feb 02 11:14:35 crc kubenswrapper[4782]: E0202 11:14:35.514014 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="extract-content"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.514021 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="extract-content"
Feb 02 11:14:35 crc kubenswrapper[4782]: E0202 11:14:35.514033 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="registry-server"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.514039 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="registry-server"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.514179 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="registry-server"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.514904 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.518859 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.519069 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.519917 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.520172 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.520385 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.531818 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"]
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.571362 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.571467 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.571517 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.571686 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wblkh\" (UniqueName: \"kubernetes.io/projected/6cede59e-7f51-455a-8405-3ae76f40e348-kube-api-access-wblkh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.571880 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.674491 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.674568 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.674748 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.674963 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wblkh\" (UniqueName: \"kubernetes.io/projected/6cede59e-7f51-455a-8405-3ae76f40e348-kube-api-access-wblkh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.675265 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.683223 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.683252 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.684321 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.692758 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.696825 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wblkh\" (UniqueName: \"kubernetes.io/projected/6cede59e-7f51-455a-8405-3ae76f40e348-kube-api-access-wblkh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.836483 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:36 crc kubenswrapper[4782]: I0202 11:14:36.440316 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"]
Feb 02 11:14:36 crc kubenswrapper[4782]: W0202 11:14:36.441180 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cede59e_7f51_455a_8405_3ae76f40e348.slice/crio-d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82 WatchSource:0}: Error finding container d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82: Status 404 returned error can't find the container with id d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82
Feb 02 11:14:36 crc kubenswrapper[4782]: I0202 11:14:36.453918 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" event={"ID":"6cede59e-7f51-455a-8405-3ae76f40e348","Type":"ContainerStarted","Data":"d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82"}
Feb 02 11:14:37 crc kubenswrapper[4782]: I0202 11:14:37.463479 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" event={"ID":"6cede59e-7f51-455a-8405-3ae76f40e348","Type":"ContainerStarted","Data":"9121cc1487cd3f8f2a666f0332657834074163a1fe3c409b901a968edf1ab0b2"}
Feb 02 11:14:37 crc kubenswrapper[4782]: I0202 11:14:37.492093 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" podStartSLOduration=1.913352739 podStartE2EDuration="2.492067734s" podCreationTimestamp="2026-02-02 11:14:35 +0000 UTC" firstStartedPulling="2026-02-02 11:14:36.443071696 +0000 UTC m=+2156.327264412" lastFinishedPulling="2026-02-02 11:14:37.021786651 +0000 UTC m=+2156.905979407" observedRunningTime="2026-02-02 11:14:37.476615029 +0000 UTC m=+2157.360807745" watchObservedRunningTime="2026-02-02 11:14:37.492067734 +0000 UTC m=+2157.376260490"
Feb 02 11:14:48 crc kubenswrapper[4782]: I0202 11:14:48.545759 4782 generic.go:334] "Generic (PLEG): container finished" podID="6cede59e-7f51-455a-8405-3ae76f40e348" containerID="9121cc1487cd3f8f2a666f0332657834074163a1fe3c409b901a968edf1ab0b2" exitCode=0
Feb 02 11:14:48 crc kubenswrapper[4782]: I0202 11:14:48.545869 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" event={"ID":"6cede59e-7f51-455a-8405-3ae76f40e348","Type":"ContainerDied","Data":"9121cc1487cd3f8f2a666f0332657834074163a1fe3c409b901a968edf1ab0b2"}
Feb 02 11:14:49 crc kubenswrapper[4782]: I0202 11:14:49.908479 4782 scope.go:117] "RemoveContainer" containerID="f4762256f4358dc203e66c5c913257fa22830b06b1398f9270a2496bb4594c32"
Feb 02 11:14:49 crc kubenswrapper[4782]: I0202 11:14:49.990094 4782 scope.go:117] "RemoveContainer" containerID="a3652484620aa178a685846c99f7de4b05cf1ea0f50bee5c8828a07d27f3b419"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.020382 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.056623 4782 scope.go:117] "RemoveContainer" containerID="47940927ded7a9aac258b8c6a3364ef69283f34e697e95ad52e93cc9f65a9e0c"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.153623 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wblkh\" (UniqueName: \"kubernetes.io/projected/6cede59e-7f51-455a-8405-3ae76f40e348-kube-api-access-wblkh\") pod \"6cede59e-7f51-455a-8405-3ae76f40e348\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") "
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.153786 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ceph\") pod \"6cede59e-7f51-455a-8405-3ae76f40e348\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") "
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.153834 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-repo-setup-combined-ca-bundle\") pod \"6cede59e-7f51-455a-8405-3ae76f40e348\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") "
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.153882 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ssh-key-openstack-edpm-ipam\") pod \"6cede59e-7f51-455a-8405-3ae76f40e348\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") "
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.153918 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-inventory\") pod \"6cede59e-7f51-455a-8405-3ae76f40e348\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") "
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.162591 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6cede59e-7f51-455a-8405-3ae76f40e348" (UID: "6cede59e-7f51-455a-8405-3ae76f40e348"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.164933 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cede59e-7f51-455a-8405-3ae76f40e348-kube-api-access-wblkh" (OuterVolumeSpecName: "kube-api-access-wblkh") pod "6cede59e-7f51-455a-8405-3ae76f40e348" (UID: "6cede59e-7f51-455a-8405-3ae76f40e348"). InnerVolumeSpecName "kube-api-access-wblkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.166975 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ceph" (OuterVolumeSpecName: "ceph") pod "6cede59e-7f51-455a-8405-3ae76f40e348" (UID: "6cede59e-7f51-455a-8405-3ae76f40e348"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.184876 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-inventory" (OuterVolumeSpecName: "inventory") pod "6cede59e-7f51-455a-8405-3ae76f40e348" (UID: "6cede59e-7f51-455a-8405-3ae76f40e348"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.201781 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6cede59e-7f51-455a-8405-3ae76f40e348" (UID: "6cede59e-7f51-455a-8405-3ae76f40e348"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.215435 4782 scope.go:117] "RemoveContainer" containerID="1918681cf715e6155198ca866454b6e4a0c53baf344cbc2db5089e48edd2cc36"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.246365 4782 scope.go:117] "RemoveContainer" containerID="602d6c4af1ac8198091f7c68ae19fa3196c9ffcd0620c4b0def39e668a4ec792"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.255705 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ceph\") on node \"crc\" DevicePath \"\""
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.255738 4782 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.255759 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.255780 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-inventory\") on node \"crc\" DevicePath \"\""
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.255790 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wblkh\" (UniqueName: \"kubernetes.io/projected/6cede59e-7f51-455a-8405-3ae76f40e348-kube-api-access-wblkh\") on node \"crc\" DevicePath \"\""
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.277364 4782 scope.go:117] "RemoveContainer" containerID="65e9d4460cda578d85c98c8eacb6e70446a4235a9df02ce23f87a954cc50ea96"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.338597 4782 scope.go:117] "RemoveContainer" containerID="d704a337ba153cb759a9029666c65419beb8b579a0125a3a01b8036bcaa12955"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.405541 4782 scope.go:117] "RemoveContainer" containerID="0e893f699ea19753909fb2dc54c9a946d6efd297f534f8a2fd10b438cd438ecd"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.468467 4782 scope.go:117] "RemoveContainer" containerID="f5d5891e49acba50900ea7dad6534db383a08dbcaa564c467b692ffae7d6b80a"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.499187 4782 scope.go:117] "RemoveContainer" containerID="1d498ad8d1bfb2287778416cd5cee7768c1031fd2150b0b80bdee8cb15dfaffd"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.566687 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" event={"ID":"6cede59e-7f51-455a-8405-3ae76f40e348","Type":"ContainerDied","Data":"d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82"}
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.566874 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.566987 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.655874 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"]
Feb 02 11:14:50 crc kubenswrapper[4782]: E0202 11:14:50.656277 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cede59e-7f51-455a-8405-3ae76f40e348" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.656297 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cede59e-7f51-455a-8405-3ae76f40e348" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.656479 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cede59e-7f51-455a-8405-3ae76f40e348" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.658932 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.662318 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.662473 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.662539 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.662675 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.662782 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.668592 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"]
Feb 02 11:14:50 crc kubenswrapper[4782]: E0202 11:14:50.736817 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cede59e_7f51_455a_8405_3ae76f40e348.slice/crio-d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82\": RecentStats: unable to find data in memory cache]"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.763570 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.763629 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hggwq\" (UniqueName: \"kubernetes.io/projected/14dddbe2-21a7-417a-8d21-ab97f18aef5d-kube-api-access-hggwq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.763722 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.763751 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.763784 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.865967 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.866046 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hggwq\" (UniqueName: \"kubernetes.io/projected/14dddbe2-21a7-417a-8d21-ab97f18aef5d-kube-api-access-hggwq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.866172 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.866229 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.866271 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.872438 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.873144 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.882671 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.884966 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.885724 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hggwq\" (UniqueName: \"kubernetes.io/projected/14dddbe2-21a7-417a-8d21-ab97f18aef5d-kube-api-access-hggwq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:51 crc kubenswrapper[4782]: I0202 11:14:51.064282 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"
Feb 02 11:14:51 crc kubenswrapper[4782]: I0202 11:14:51.623791 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"]
Feb 02 11:14:52 crc kubenswrapper[4782]: I0202 11:14:52.595008 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" event={"ID":"14dddbe2-21a7-417a-8d21-ab97f18aef5d","Type":"ContainerStarted","Data":"69de4465f107269ffd74eebaf2f980c3701cfb9aad1cfbb6b352c4678c7d6844"}
Feb 02 11:14:52 crc kubenswrapper[4782]: I0202 11:14:52.595351 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" event={"ID":"14dddbe2-21a7-417a-8d21-ab97f18aef5d","Type":"ContainerStarted","Data":"05f3f00be72c9cbb26f985a3b0b234e1373a612190801d2ad37e870a76ce2098"}
Feb 02 11:14:52 crc kubenswrapper[4782]: I0202 11:14:52.613225 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" podStartSLOduration=2.090350802 podStartE2EDuration="2.61320697s" podCreationTimestamp="2026-02-02 11:14:50 +0000 UTC" firstStartedPulling="2026-02-02 11:14:51.630504489 +0000 UTC m=+2171.514697205" lastFinishedPulling="2026-02-02 11:14:52.153360657 +0000 UTC m=+2172.037553373" observedRunningTime="2026-02-02 11:14:52.610507392 +0000 UTC m=+2172.494700108" watchObservedRunningTime="2026-02-02 11:14:52.61320697 +0000 UTC m=+2172.497399686"
Feb 02 11:14:52 crc kubenswrapper[4782]: I0202 11:14:52.952197 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 11:14:52 crc kubenswrapper[4782]: I0202 11:14:52.952255 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.134554 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"]
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.141054 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.145244 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.145750 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.152415 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"]
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.241055 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49267abf-7f15-4460-bbc4-d7b0cc162817-secret-volume\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.241129 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49267abf-7f15-4460-bbc4-d7b0cc162817-config-volume\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.241393 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxdvc\" (UniqueName: \"kubernetes.io/projected/49267abf-7f15-4460-bbc4-d7b0cc162817-kube-api-access-sxdvc\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.343885 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxdvc\" (UniqueName: \"kubernetes.io/projected/49267abf-7f15-4460-bbc4-d7b0cc162817-kube-api-access-sxdvc\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.344079 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49267abf-7f15-4460-bbc4-d7b0cc162817-secret-volume\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.344122 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49267abf-7f15-4460-bbc4-d7b0cc162817-config-volume\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.345297 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49267abf-7f15-4460-bbc4-d7b0cc162817-config-volume\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.352431 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49267abf-7f15-4460-bbc4-d7b0cc162817-secret-volume\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.366500 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxdvc\" (UniqueName: \"kubernetes.io/projected/49267abf-7f15-4460-bbc4-d7b0cc162817-kube-api-access-sxdvc\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.476106 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.922029 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"]
Feb 02 11:15:01 crc kubenswrapper[4782]: I0202 11:15:01.670518 4782 generic.go:334] "Generic (PLEG): container finished" podID="49267abf-7f15-4460-bbc4-d7b0cc162817" containerID="f26205a7a090662d7627013616952d20f36db3d708e8b6aa67a214bacd583878" exitCode=0
Feb 02 11:15:01 crc kubenswrapper[4782]: I0202 11:15:01.670675 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" event={"ID":"49267abf-7f15-4460-bbc4-d7b0cc162817","Type":"ContainerDied","Data":"f26205a7a090662d7627013616952d20f36db3d708e8b6aa67a214bacd583878"}
Feb 02 11:15:01 crc kubenswrapper[4782]: I0202 11:15:01.671152 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" event={"ID":"49267abf-7f15-4460-bbc4-d7b0cc162817","Type":"ContainerStarted","Data":"e9a23527cd890dbeb14872b2ce5e6d5c6107846f37cf16e9c7d090e1b97c491f"}
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.048756 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.099979 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49267abf-7f15-4460-bbc4-d7b0cc162817-secret-volume\") pod \"49267abf-7f15-4460-bbc4-d7b0cc162817\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") "
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.100349 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxdvc\" (UniqueName: \"kubernetes.io/projected/49267abf-7f15-4460-bbc4-d7b0cc162817-kube-api-access-sxdvc\") pod \"49267abf-7f15-4460-bbc4-d7b0cc162817\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") "
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.100978 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49267abf-7f15-4460-bbc4-d7b0cc162817-config-volume\") pod \"49267abf-7f15-4460-bbc4-d7b0cc162817\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") "
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.101633 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49267abf-7f15-4460-bbc4-d7b0cc162817-config-volume" (OuterVolumeSpecName: "config-volume") pod "49267abf-7f15-4460-bbc4-d7b0cc162817" (UID: "49267abf-7f15-4460-bbc4-d7b0cc162817"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.102018 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49267abf-7f15-4460-bbc4-d7b0cc162817-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.117452 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49267abf-7f15-4460-bbc4-d7b0cc162817-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "49267abf-7f15-4460-bbc4-d7b0cc162817" (UID: "49267abf-7f15-4460-bbc4-d7b0cc162817"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.117543 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49267abf-7f15-4460-bbc4-d7b0cc162817-kube-api-access-sxdvc" (OuterVolumeSpecName: "kube-api-access-sxdvc") pod "49267abf-7f15-4460-bbc4-d7b0cc162817" (UID: "49267abf-7f15-4460-bbc4-d7b0cc162817"). InnerVolumeSpecName "kube-api-access-sxdvc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.203441 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49267abf-7f15-4460-bbc4-d7b0cc162817-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.203493 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxdvc\" (UniqueName: \"kubernetes.io/projected/49267abf-7f15-4460-bbc4-d7b0cc162817-kube-api-access-sxdvc\") on node \"crc\" DevicePath \"\""
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.686431 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" event={"ID":"49267abf-7f15-4460-bbc4-d7b0cc162817","Type":"ContainerDied","Data":"e9a23527cd890dbeb14872b2ce5e6d5c6107846f37cf16e9c7d090e1b97c491f"}
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.686465 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"
Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.686475 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9a23527cd890dbeb14872b2ce5e6d5c6107846f37cf16e9c7d090e1b97c491f"
Feb 02 11:15:04 crc kubenswrapper[4782]: I0202 11:15:04.143223 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"]
Feb 02 11:15:04 crc kubenswrapper[4782]: I0202 11:15:04.152312 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"]
Feb 02 11:15:04 crc kubenswrapper[4782]: I0202 11:15:04.831974 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9832aa65-d498-4a21-b53a-ebc591328a00" path="/var/lib/kubelet/pods/9832aa65-d498-4a21-b53a-ebc591328a00/volumes"
Feb 02 11:15:22 crc kubenswrapper[4782]: I0202 11:15:22.951541 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 11:15:22 crc kubenswrapper[4782]: I0202 11:15:22.952238 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 11:15:22 crc kubenswrapper[4782]: I0202 11:15:22.952293 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk"
Feb 02 11:15:22 crc kubenswrapper[4782]: I0202 11:15:22.953657 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f12380752e6de4f8dedc92e062f8cb6f3d5a16260278e7b8b47bff7dc97ca296"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 11:15:22 crc kubenswrapper[4782]: I0202 11:15:22.953725 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://f12380752e6de4f8dedc92e062f8cb6f3d5a16260278e7b8b47bff7dc97ca296" gracePeriod=600
Feb 02 11:15:23 crc kubenswrapper[4782]: I0202 11:15:23.837960 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="f12380752e6de4f8dedc92e062f8cb6f3d5a16260278e7b8b47bff7dc97ca296" exitCode=0
Feb 02 11:15:23 crc kubenswrapper[4782]: I0202 11:15:23.838036 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"f12380752e6de4f8dedc92e062f8cb6f3d5a16260278e7b8b47bff7dc97ca296"}
Feb 02 11:15:23 crc kubenswrapper[4782]: I0202 11:15:23.838304 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9"}
Feb 02 11:15:23 crc kubenswrapper[4782]: I0202 11:15:23.838325 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8"
Feb 02 11:15:50 crc kubenswrapper[4782]: I0202 11:15:50.813801 4782 scope.go:117] "RemoveContainer" containerID="b85748eff3923d08bc6d620f725d7b018256a0e4610871950b9aeb66eccc2539"
Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.866247 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dkcrj"]
Feb 02 11:16:18 crc kubenswrapper[4782]: E0202 11:16:18.867242 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49267abf-7f15-4460-bbc4-d7b0cc162817" containerName="collect-profiles"
Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.867258 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="49267abf-7f15-4460-bbc4-d7b0cc162817" containerName="collect-profiles"
Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.867500 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="49267abf-7f15-4460-bbc4-d7b0cc162817" containerName="collect-profiles"
Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.869042 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dkcrj"
Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.882934 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dkcrj"]
Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.909752 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vftck\" (UniqueName: \"kubernetes.io/projected/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-kube-api-access-vftck\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj"
Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.910085 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-catalog-content\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj"
Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.910290 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-utilities\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj"
Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.011723 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-utilities\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj"
Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.011812 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vftck\" (UniqueName: \"kubernetes.io/projected/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-kube-api-access-vftck\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj"
Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.011915 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-catalog-content\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj"
Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.012290 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-catalog-content\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj"
Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.012487 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-utilities\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj"
Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.031764 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vftck\" (UniqueName: \"kubernetes.io/projected/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-kube-api-access-vftck\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj"
Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.186839 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dkcrj"
Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.562663 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dkcrj"]
Feb 02 11:16:20 crc kubenswrapper[4782]: I0202 11:16:20.305459 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerID="4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687" exitCode=0
Feb 02 11:16:20 crc kubenswrapper[4782]: I0202 11:16:20.305520 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerDied","Data":"4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687"}
Feb 02 11:16:20 crc kubenswrapper[4782]: I0202 11:16:20.306125 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerStarted","Data":"ecf6ff51b42e9000b6e16920f1a53fa2716549df5b3034607af134a5eda026c3"}
Feb 02 11:16:21 crc kubenswrapper[4782]: I0202 11:16:21.315266 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerStarted","Data":"ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206"}
Feb 02 11:16:22 crc kubenswrapper[4782]: I0202 11:16:22.323814 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerID="ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206" exitCode=0
Feb 02 11:16:22 crc kubenswrapper[4782]: I0202 11:16:22.323852 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerDied","Data":"ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206"}
Feb 02 11:16:23 crc kubenswrapper[4782]: I0202 11:16:23.333426 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerStarted","Data":"49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387"}
Feb 02 11:16:23 crc kubenswrapper[4782]: I0202 11:16:23.356078 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dkcrj" podStartSLOduration=2.938981054 podStartE2EDuration="5.356059331s" podCreationTimestamp="2026-02-02 11:16:18 +0000 UTC" firstStartedPulling="2026-02-02 11:16:20.307665073 +0000 UTC m=+2260.191857789" lastFinishedPulling="2026-02-02 11:16:22.72474335 +0000 UTC m=+2262.608936066" observedRunningTime="2026-02-02 11:16:23.354842456 +0000 UTC m=+2263.239035192" watchObservedRunningTime="2026-02-02 11:16:23.356059331 +0000 UTC m=+2263.240252057"
Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.250889 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kb292"]
Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.252875 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb292"
Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.275341 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb292"]
Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.308224 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhd8g\" (UniqueName: \"kubernetes.io/projected/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-kube-api-access-nhd8g\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292"
Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.308350 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-catalog-content\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292"
Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.308460 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-utilities\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292"
Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.409961 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-utilities\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292"
Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.410062 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhd8g\" (UniqueName: \"kubernetes.io/projected/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-kube-api-access-nhd8g\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292"
Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.410121 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-catalog-content\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292"
Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.410506 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-utilities\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292"
Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.410566 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-catalog-content\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\")
" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.439286 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhd8g\" (UniqueName: \"kubernetes.io/projected/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-kube-api-access-nhd8g\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.578248 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:25 crc kubenswrapper[4782]: I0202 11:16:25.067342 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb292"] Feb 02 11:16:25 crc kubenswrapper[4782]: W0202 11:16:25.080290 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8fb06dc_1bfc_4b37_a62e_9ebe2b22ae27.slice/crio-211d97d65ac8659a00ee63732b70972188a676dc2daaeef83138d1cd8953071a WatchSource:0}: Error finding container 211d97d65ac8659a00ee63732b70972188a676dc2daaeef83138d1cd8953071a: Status 404 returned error can't find the container with id 211d97d65ac8659a00ee63732b70972188a676dc2daaeef83138d1cd8953071a Feb 02 11:16:25 crc kubenswrapper[4782]: I0202 11:16:25.352046 4782 generic.go:334] "Generic (PLEG): container finished" podID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerID="1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3" exitCode=0 Feb 02 11:16:25 crc kubenswrapper[4782]: I0202 11:16:25.352119 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerDied","Data":"1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3"} Feb 02 11:16:25 crc kubenswrapper[4782]: I0202 11:16:25.352324 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerStarted","Data":"211d97d65ac8659a00ee63732b70972188a676dc2daaeef83138d1cd8953071a"} Feb 02 11:16:27 crc kubenswrapper[4782]: I0202 11:16:27.381405 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerStarted","Data":"10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1"} Feb 02 11:16:29 crc kubenswrapper[4782]: I0202 11:16:29.189870 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:29 crc kubenswrapper[4782]: I0202 11:16:29.190171 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:29 crc kubenswrapper[4782]: I0202 11:16:29.237240 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:29 crc kubenswrapper[4782]: I0202 11:16:29.397863 4782 generic.go:334] "Generic (PLEG): container finished" podID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerID="10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1" exitCode=0 Feb 02 11:16:29 crc kubenswrapper[4782]: I0202 11:16:29.397929 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerDied","Data":"10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1"} Feb 02 11:16:29 crc kubenswrapper[4782]: I0202 11:16:29.443726 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:30 crc kubenswrapper[4782]: I0202 11:16:30.408305 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerStarted","Data":"d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776"} Feb 02 11:16:30 crc kubenswrapper[4782]: I0202 11:16:30.426529 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kb292" podStartSLOduration=1.764434789 podStartE2EDuration="6.426504239s" podCreationTimestamp="2026-02-02 11:16:24 +0000 UTC" firstStartedPulling="2026-02-02 11:16:25.353979078 +0000 UTC m=+2265.238171794" lastFinishedPulling="2026-02-02 11:16:30.016048528 +0000 UTC m=+2269.900241244" observedRunningTime="2026-02-02 11:16:30.42410551 +0000 UTC m=+2270.308298226" watchObservedRunningTime="2026-02-02 11:16:30.426504239 +0000 UTC m=+2270.310696955" Feb 02 11:16:31 crc kubenswrapper[4782]: I0202 11:16:31.838882 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dkcrj"] Feb 02 11:16:31 crc kubenswrapper[4782]: I0202 11:16:31.839123 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dkcrj" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="registry-server" containerID="cri-o://49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387" gracePeriod=2 Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.297673 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.310345 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-catalog-content\") pod \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.310573 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vftck\" (UniqueName: \"kubernetes.io/projected/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-kube-api-access-vftck\") pod \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.310621 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-utilities\") pod \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.311287 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-utilities" (OuterVolumeSpecName: "utilities") pod "f4ca49fd-6a65-44d2-9733-5dd64b0c0552" (UID: "f4ca49fd-6a65-44d2-9733-5dd64b0c0552"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.311837 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.317337 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-kube-api-access-vftck" (OuterVolumeSpecName: "kube-api-access-vftck") pod "f4ca49fd-6a65-44d2-9733-5dd64b0c0552" (UID: "f4ca49fd-6a65-44d2-9733-5dd64b0c0552"). InnerVolumeSpecName "kube-api-access-vftck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.361737 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4ca49fd-6a65-44d2-9733-5dd64b0c0552" (UID: "f4ca49fd-6a65-44d2-9733-5dd64b0c0552"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.413835 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.413874 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vftck\" (UniqueName: \"kubernetes.io/projected/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-kube-api-access-vftck\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.424843 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerID="49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387" exitCode=0 Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.424892 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerDied","Data":"49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387"} Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.424917 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerDied","Data":"ecf6ff51b42e9000b6e16920f1a53fa2716549df5b3034607af134a5eda026c3"} Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.424936 4782 scope.go:117] "RemoveContainer" containerID="49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.425053 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.448147 4782 scope.go:117] "RemoveContainer" containerID="ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.472743 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dkcrj"] Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.473656 4782 scope.go:117] "RemoveContainer" containerID="4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.480821 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dkcrj"] Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.507945 4782 scope.go:117] "RemoveContainer" containerID="49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387" Feb 02 11:16:32 crc kubenswrapper[4782]: E0202 11:16:32.509071 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387\": container with ID starting with 49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387 not found: ID does not exist" containerID="49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.509101 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387"} err="failed to get container status \"49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387\": rpc error: code = NotFound desc = could not find container \"49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387\": container with ID starting with 49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387 not found: ID does not exist" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.509123 4782 scope.go:117] "RemoveContainer" containerID="ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206" Feb 02 11:16:32 crc kubenswrapper[4782]: E0202 11:16:32.509549 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206\": container with ID starting with ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206 not found: ID does not exist" containerID="ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.509680 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206"} err="failed to get container status \"ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206\": rpc error: code = NotFound desc = could not find container \"ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206\": container with ID starting with ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206 not found: ID does not exist" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.509790 4782 scope.go:117] "RemoveContainer" containerID="4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687" Feb 02 11:16:32 crc kubenswrapper[4782]: E0202 11:16:32.511266 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687\": container with ID starting with 4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687 not found: ID does not exist" containerID="4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.511366 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687"} err="failed to get container status \"4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687\": rpc error: code = NotFound desc = could not find container \"4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687\": container with ID starting with 4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687 not found: ID does not exist" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.830402 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" path="/var/lib/kubelet/pods/f4ca49fd-6a65-44d2-9733-5dd64b0c0552/volumes" Feb 02 11:16:33 crc kubenswrapper[4782]: I0202 11:16:33.438879 4782 generic.go:334] "Generic (PLEG): container finished" podID="14dddbe2-21a7-417a-8d21-ab97f18aef5d" containerID="69de4465f107269ffd74eebaf2f980c3701cfb9aad1cfbb6b352c4678c7d6844" exitCode=0 Feb 02 11:16:33 crc kubenswrapper[4782]: I0202 11:16:33.438955 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" event={"ID":"14dddbe2-21a7-417a-8d21-ab97f18aef5d","Type":"ContainerDied","Data":"69de4465f107269ffd74eebaf2f980c3701cfb9aad1cfbb6b352c4678c7d6844"} Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.579846 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.580118 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.628228 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.822089 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.874874 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-bootstrap-combined-ca-bundle\") pod \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.875056 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ssh-key-openstack-edpm-ipam\") pod \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.875432 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-inventory\") pod \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.875463 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ceph\") pod \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.875489 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hggwq\" (UniqueName: \"kubernetes.io/projected/14dddbe2-21a7-417a-8d21-ab97f18aef5d-kube-api-access-hggwq\") pod \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.881843 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14dddbe2-21a7-417a-8d21-ab97f18aef5d-kube-api-access-hggwq" (OuterVolumeSpecName: "kube-api-access-hggwq") pod "14dddbe2-21a7-417a-8d21-ab97f18aef5d" (UID: "14dddbe2-21a7-417a-8d21-ab97f18aef5d"). InnerVolumeSpecName "kube-api-access-hggwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.882097 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ceph" (OuterVolumeSpecName: "ceph") pod "14dddbe2-21a7-417a-8d21-ab97f18aef5d" (UID: "14dddbe2-21a7-417a-8d21-ab97f18aef5d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.885831 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "14dddbe2-21a7-417a-8d21-ab97f18aef5d" (UID: "14dddbe2-21a7-417a-8d21-ab97f18aef5d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.900738 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-inventory" (OuterVolumeSpecName: "inventory") pod "14dddbe2-21a7-417a-8d21-ab97f18aef5d" (UID: "14dddbe2-21a7-417a-8d21-ab97f18aef5d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.906256 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "14dddbe2-21a7-417a-8d21-ab97f18aef5d" (UID: "14dddbe2-21a7-417a-8d21-ab97f18aef5d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.977415 4782 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.977457 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.977470 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.977483 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.977494 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hggwq\" (UniqueName: \"kubernetes.io/projected/14dddbe2-21a7-417a-8d21-ab97f18aef5d-kube-api-access-hggwq\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.455236 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" event={"ID":"14dddbe2-21a7-417a-8d21-ab97f18aef5d","Type":"ContainerDied","Data":"05f3f00be72c9cbb26f985a3b0b234e1373a612190801d2ad37e870a76ce2098"} Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.455286 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f3f00be72c9cbb26f985a3b0b234e1373a612190801d2ad37e870a76ce2098" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.455259 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.503231 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.559522 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf"] Feb 02 11:16:35 crc kubenswrapper[4782]: E0202 11:16:35.560530 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14dddbe2-21a7-417a-8d21-ab97f18aef5d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.560628 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="14dddbe2-21a7-417a-8d21-ab97f18aef5d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:16:35 crc kubenswrapper[4782]: E0202 11:16:35.560870 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="extract-content" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.560945 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="extract-content" Feb 02 11:16:35 crc kubenswrapper[4782]: E0202 11:16:35.561224 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="registry-server" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.561431 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="registry-server" Feb 02 11:16:35 crc kubenswrapper[4782]: E0202 11:16:35.561521 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="extract-utilities" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.561586 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="extract-utilities" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.561906 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="14dddbe2-21a7-417a-8d21-ab97f18aef5d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.562012 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="registry-server" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.562704 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.565047 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.565375 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.565516 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.565690 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.567012 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.572538 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf"] Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.595153 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.595241 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.595268 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dt85\" (UniqueName: \"kubernetes.io/projected/23a1d5dc-9cfd-4c8a-8534-db3075d99574-kube-api-access-4dt85\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.595305 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.697691 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc 
kubenswrapper[4782]: I0202 11:16:35.697858 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.697909 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dt85\" (UniqueName: \"kubernetes.io/projected/23a1d5dc-9cfd-4c8a-8534-db3075d99574-kube-api-access-4dt85\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.698039 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.702230 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.702276 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.703691 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.726968 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dt85\" (UniqueName: \"kubernetes.io/projected/23a1d5dc-9cfd-4c8a-8534-db3075d99574-kube-api-access-4dt85\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.895415 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:36 crc kubenswrapper[4782]: I0202 11:16:36.406957 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf"] Feb 02 11:16:36 crc kubenswrapper[4782]: I0202 11:16:36.464870 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" event={"ID":"23a1d5dc-9cfd-4c8a-8534-db3075d99574","Type":"ContainerStarted","Data":"197e8b1584d4103966d35029a15423b0fb273cdc870eef66ede372a94ca19367"} Feb 02 11:16:36 crc kubenswrapper[4782]: I0202 11:16:36.642207 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb292"] Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.472545 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kb292" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="registry-server" containerID="cri-o://d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776" gracePeriod=2 Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.473714 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" event={"ID":"23a1d5dc-9cfd-4c8a-8534-db3075d99574","Type":"ContainerStarted","Data":"36de3889878e544ddd04f0431bda4177dd543f991b4c87a9741668b0c02aa32c"} Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.501519 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" podStartSLOduration=1.953761903 podStartE2EDuration="2.501496777s" podCreationTimestamp="2026-02-02 11:16:35 +0000 UTC" firstStartedPulling="2026-02-02 11:16:36.417379106 +0000 UTC m=+2276.301571822" lastFinishedPulling="2026-02-02 11:16:36.96511398 +0000 UTC m=+2276.849306696" observedRunningTime="2026-02-02 11:16:37.489185963 +0000 UTC m=+2277.373378689" watchObservedRunningTime="2026-02-02 11:16:37.501496777 +0000 UTC m=+2277.385689513" Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.874245 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.946896 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-utilities\") pod \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.946942 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhd8g\" (UniqueName: \"kubernetes.io/projected/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-kube-api-access-nhd8g\") pod \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.947119 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-catalog-content\") pod \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.948084 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-utilities" (OuterVolumeSpecName: "utilities") pod "b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" (UID: "b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.952910 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-kube-api-access-nhd8g" (OuterVolumeSpecName: "kube-api-access-nhd8g") pod "b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" (UID: "b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27"). InnerVolumeSpecName "kube-api-access-nhd8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.972305 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" (UID: "b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.049242 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.049273 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.049284 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhd8g\" (UniqueName: \"kubernetes.io/projected/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-kube-api-access-nhd8g\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.483892 4782 generic.go:334] "Generic (PLEG): container finished" podID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerID="d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776" exitCode=0 Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.483977 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.483965 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerDied","Data":"d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776"} Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.484919 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerDied","Data":"211d97d65ac8659a00ee63732b70972188a676dc2daaeef83138d1cd8953071a"} Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.484964 4782 scope.go:117] "RemoveContainer" containerID="d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.514856 4782 scope.go:117] "RemoveContainer" containerID="10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.525319 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb292"] Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.536919 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb292"] Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.547745 4782 scope.go:117] "RemoveContainer" containerID="1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.581263 4782 scope.go:117] "RemoveContainer" containerID="d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776" Feb 02 11:16:38 crc kubenswrapper[4782]: E0202 11:16:38.581872 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776\": container with ID starting with d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776 not found: ID does not exist" containerID="d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.581925 4782 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776"} err="failed to get container status \"d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776\": rpc error: code = NotFound desc = could not find container \"d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776\": container with ID starting with d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776 not found: ID does not exist" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.581964 4782 scope.go:117] "RemoveContainer" containerID="10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1" Feb 02 11:16:38 crc kubenswrapper[4782]: E0202 11:16:38.582257 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1\": container with ID starting with 10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1 not found: ID does not exist" containerID="10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.582286 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1"} err="failed to get container status \"10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1\": rpc error: code = NotFound desc = could not find container \"10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1\": container with ID starting with 10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1 not found: ID does not exist" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.582304 4782 scope.go:117] "RemoveContainer" containerID="1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3" Feb 02 11:16:38 crc kubenswrapper[4782]: E0202 11:16:38.582651 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3\": container with ID starting with 1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3 not found: ID does not exist" containerID="1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.582689 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3"} err="failed to get container status \"1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3\": rpc error: code = NotFound desc = could not find container \"1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3\": container with ID starting with 1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3 not found: ID does not exist" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.833092 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" path="/var/lib/kubelet/pods/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27/volumes" Feb 02 11:17:02 crc kubenswrapper[4782]: I0202 11:17:02.702927 4782 generic.go:334] "Generic (PLEG): container finished" podID="23a1d5dc-9cfd-4c8a-8534-db3075d99574" containerID="36de3889878e544ddd04f0431bda4177dd543f991b4c87a9741668b0c02aa32c" exitCode=0 Feb 02 11:17:02 crc kubenswrapper[4782]: I0202 
11:17:02.702998 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" event={"ID":"23a1d5dc-9cfd-4c8a-8534-db3075d99574","Type":"ContainerDied","Data":"36de3889878e544ddd04f0431bda4177dd543f991b4c87a9741668b0c02aa32c"} Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.138946 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.228983 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dt85\" (UniqueName: \"kubernetes.io/projected/23a1d5dc-9cfd-4c8a-8534-db3075d99574-kube-api-access-4dt85\") pod \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.229156 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ssh-key-openstack-edpm-ipam\") pod \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.229180 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ceph\") pod \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.229217 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-inventory\") pod \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.235993 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ceph" (OuterVolumeSpecName: "ceph") pod "23a1d5dc-9cfd-4c8a-8534-db3075d99574" (UID: "23a1d5dc-9cfd-4c8a-8534-db3075d99574"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.239065 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a1d5dc-9cfd-4c8a-8534-db3075d99574-kube-api-access-4dt85" (OuterVolumeSpecName: "kube-api-access-4dt85") pod "23a1d5dc-9cfd-4c8a-8534-db3075d99574" (UID: "23a1d5dc-9cfd-4c8a-8534-db3075d99574"). InnerVolumeSpecName "kube-api-access-4dt85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.263521 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23a1d5dc-9cfd-4c8a-8534-db3075d99574" (UID: "23a1d5dc-9cfd-4c8a-8534-db3075d99574"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.277417 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-inventory" (OuterVolumeSpecName: "inventory") pod "23a1d5dc-9cfd-4c8a-8534-db3075d99574" (UID: "23a1d5dc-9cfd-4c8a-8534-db3075d99574"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.330894 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dt85\" (UniqueName: \"kubernetes.io/projected/23a1d5dc-9cfd-4c8a-8534-db3075d99574-kube-api-access-4dt85\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.331137 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.331234 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.331301 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.718770 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" event={"ID":"23a1d5dc-9cfd-4c8a-8534-db3075d99574","Type":"ContainerDied","Data":"197e8b1584d4103966d35029a15423b0fb273cdc870eef66ede372a94ca19367"} Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.719047 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="197e8b1584d4103966d35029a15423b0fb273cdc870eef66ede372a94ca19367" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.718860 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.909272 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr"] Feb 02 11:17:04 crc kubenswrapper[4782]: E0202 11:17:04.909735 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="extract-utilities" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.909756 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="extract-utilities" Feb 02 11:17:04 crc kubenswrapper[4782]: E0202 11:17:04.909787 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="extract-content" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.909797 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="extract-content" Feb 02 11:17:04 crc kubenswrapper[4782]: E0202 11:17:04.909809 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a1d5dc-9cfd-4c8a-8534-db3075d99574" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.909821 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a1d5dc-9cfd-4c8a-8534-db3075d99574" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:04 crc kubenswrapper[4782]: E0202 11:17:04.909841 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="registry-server" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.909849 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="registry-server" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.910073 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a1d5dc-9cfd-4c8a-8534-db3075d99574" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.910103 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="registry-server" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.910926 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.913980 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.915190 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.915220 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.915441 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.915471 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.924606 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr"] Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.043569 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.043892 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.044236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.044465 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gzng\" (UniqueName: \"kubernetes.io/projected/03fa384d-760c-4c0a-b58f-91a876eeb3d7-kube-api-access-7gzng\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.145920 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.146012 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gzng\" (UniqueName: \"kubernetes.io/projected/03fa384d-760c-4c0a-b58f-91a876eeb3d7-kube-api-access-7gzng\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.146081 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.146139 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.150262 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.150599 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.153153 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.166360 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gzng\" (UniqueName: \"kubernetes.io/projected/03fa384d-760c-4c0a-b58f-91a876eeb3d7-kube-api-access-7gzng\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.226751 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.749285 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr"] Feb 02 11:17:06 crc kubenswrapper[4782]: I0202 11:17:06.733435 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" event={"ID":"03fa384d-760c-4c0a-b58f-91a876eeb3d7","Type":"ContainerStarted","Data":"2235833b85f2f95e8bedc81df9f5556debf3ebc3dc20a3681176ccdf6b9e1069"} Feb 02 11:17:06 crc kubenswrapper[4782]: I0202 11:17:06.734583 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" event={"ID":"03fa384d-760c-4c0a-b58f-91a876eeb3d7","Type":"ContainerStarted","Data":"02f3d80c5161766ab9f2a2d37c095bb90fada2ff3f2d917297117ac92d45b513"} Feb 02 11:17:06 crc kubenswrapper[4782]: I0202 11:17:06.752007 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" podStartSLOduration=2.170299413 podStartE2EDuration="2.751984765s" podCreationTimestamp="2026-02-02 11:17:04 +0000 UTC" firstStartedPulling="2026-02-02 11:17:05.755726954 +0000 UTC m=+2305.639919670" lastFinishedPulling="2026-02-02 11:17:06.337412306 +0000 UTC m=+2306.221605022" observedRunningTime="2026-02-02 11:17:06.749572536 +0000 UTC m=+2306.633765252" watchObservedRunningTime="2026-02-02 11:17:06.751984765 +0000 UTC m=+2306.636177481" Feb 02 11:17:11 crc kubenswrapper[4782]: I0202 11:17:11.770881 4782 generic.go:334] "Generic (PLEG): container finished" podID="03fa384d-760c-4c0a-b58f-91a876eeb3d7" containerID="2235833b85f2f95e8bedc81df9f5556debf3ebc3dc20a3681176ccdf6b9e1069" exitCode=0 Feb 02 11:17:11 crc kubenswrapper[4782]: I0202 11:17:11.771134 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" event={"ID":"03fa384d-760c-4c0a-b58f-91a876eeb3d7","Type":"ContainerDied","Data":"2235833b85f2f95e8bedc81df9f5556debf3ebc3dc20a3681176ccdf6b9e1069"} Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.172722 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.294579 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ceph\") pod \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.294676 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gzng\" (UniqueName: \"kubernetes.io/projected/03fa384d-760c-4c0a-b58f-91a876eeb3d7-kube-api-access-7gzng\") pod \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.294774 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ssh-key-openstack-edpm-ipam\") pod \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.294836 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-inventory\") pod \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.300428 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ceph" (OuterVolumeSpecName: "ceph") pod "03fa384d-760c-4c0a-b58f-91a876eeb3d7" (UID: "03fa384d-760c-4c0a-b58f-91a876eeb3d7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.300522 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03fa384d-760c-4c0a-b58f-91a876eeb3d7-kube-api-access-7gzng" (OuterVolumeSpecName: "kube-api-access-7gzng") pod "03fa384d-760c-4c0a-b58f-91a876eeb3d7" (UID: "03fa384d-760c-4c0a-b58f-91a876eeb3d7"). InnerVolumeSpecName "kube-api-access-7gzng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.321722 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-inventory" (OuterVolumeSpecName: "inventory") pod "03fa384d-760c-4c0a-b58f-91a876eeb3d7" (UID: "03fa384d-760c-4c0a-b58f-91a876eeb3d7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.329083 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "03fa384d-760c-4c0a-b58f-91a876eeb3d7" (UID: "03fa384d-760c-4c0a-b58f-91a876eeb3d7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.398006 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.398036 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.398053 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.398065 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gzng\" (UniqueName: \"kubernetes.io/projected/03fa384d-760c-4c0a-b58f-91a876eeb3d7-kube-api-access-7gzng\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.787664 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" event={"ID":"03fa384d-760c-4c0a-b58f-91a876eeb3d7","Type":"ContainerDied","Data":"02f3d80c5161766ab9f2a2d37c095bb90fada2ff3f2d917297117ac92d45b513"} Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.787702 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f3d80c5161766ab9f2a2d37c095bb90fada2ff3f2d917297117ac92d45b513" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.787726 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.912364 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png"] Feb 02 11:17:13 crc kubenswrapper[4782]: E0202 11:17:13.912778 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03fa384d-760c-4c0a-b58f-91a876eeb3d7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.912794 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="03fa384d-760c-4c0a-b58f-91a876eeb3d7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.912945 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="03fa384d-760c-4c0a-b58f-91a876eeb3d7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.915465 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.919206 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.919230 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.919353 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.919416 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.919571 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.927096 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png"] Feb 02 11:17:13 crc kubenswrapper[4782]: E0202 11:17:13.975802 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03fa384d_760c_4c0a_b58f_91a876eeb3d7.slice/crio-02f3d80c5161766ab9f2a2d37c095bb90fada2ff3f2d917297117ac92d45b513\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03fa384d_760c_4c0a_b58f_91a876eeb3d7.slice\": RecentStats: unable to find data in memory cache]" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.011257 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.012604 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.012872 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.013084 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-kube-api-access-5zlwc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.115282 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.115668 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-kube-api-access-5zlwc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.115865 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.116108 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.122566 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.123161 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.125125 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.144454 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-kube-api-access-5zlwc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc 
kubenswrapper[4782]: I0202 11:17:14.240151 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.761619 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png"] Feb 02 11:17:14 crc kubenswrapper[4782]: W0202 11:17:14.769603 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbf31fe9_54a8_4cc8_b0ef_a8076cf87c52.slice/crio-7f480997ce7a2aabd74347d417056d5c313af8aadf9a41f33fe17fad0ecdde1b WatchSource:0}: Error finding container 7f480997ce7a2aabd74347d417056d5c313af8aadf9a41f33fe17fad0ecdde1b: Status 404 returned error can't find the container with id 7f480997ce7a2aabd74347d417056d5c313af8aadf9a41f33fe17fad0ecdde1b Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.796222 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" event={"ID":"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52","Type":"ContainerStarted","Data":"7f480997ce7a2aabd74347d417056d5c313af8aadf9a41f33fe17fad0ecdde1b"} Feb 02 11:17:15 crc kubenswrapper[4782]: I0202 11:17:15.806588 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" event={"ID":"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52","Type":"ContainerStarted","Data":"199f5b114440d957ca82b1b3791c3a0c06061529752ed8afc79d07dd12184ea0"} Feb 02 11:17:15 crc kubenswrapper[4782]: I0202 11:17:15.829688 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" podStartSLOduration=2.045590767 podStartE2EDuration="2.829673258s" podCreationTimestamp="2026-02-02 11:17:13 +0000 UTC" firstStartedPulling="2026-02-02 11:17:14.771810043 +0000 UTC m=+2314.656002759" lastFinishedPulling="2026-02-02 11:17:15.555892534 +0000 UTC m=+2315.440085250" observedRunningTime="2026-02-02 11:17:15.824791528 +0000 UTC m=+2315.708984254" watchObservedRunningTime="2026-02-02 11:17:15.829673258 +0000 UTC m=+2315.713865974" Feb 02 11:17:26 crc kubenswrapper[4782]: I0202 11:17:26.831692 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p8frt"] Feb 02 11:17:26 crc kubenswrapper[4782]: I0202 11:17:26.833784 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:26 crc kubenswrapper[4782]: I0202 11:17:26.846392 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p8frt"] Feb 02 11:17:26 crc kubenswrapper[4782]: I0202 11:17:26.954701 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d567t\" (UniqueName: \"kubernetes.io/projected/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-kube-api-access-d567t\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:26 crc kubenswrapper[4782]: I0202 11:17:26.954814 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-utilities\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:26 crc kubenswrapper[4782]: I0202 11:17:26.954927 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-catalog-content\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.056359 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d567t\" (UniqueName: \"kubernetes.io/projected/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-kube-api-access-d567t\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.056428 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-utilities\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.056487 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-catalog-content\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.057148 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-catalog-content\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.057229 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-utilities\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.076775 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d567t\" (UniqueName: \"kubernetes.io/projected/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-kube-api-access-d567t\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.154480 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.806677 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p8frt"] Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.913889 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerStarted","Data":"a8fcd57b87696b80c8320fe55d90ef869a6fdc47f8b986887188a39b5ad2620b"} Feb 02 11:17:28 crc kubenswrapper[4782]: I0202 11:17:28.924468 4782 generic.go:334] "Generic (PLEG): container finished" podID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerID="8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3" exitCode=0 Feb 02 11:17:28 crc kubenswrapper[4782]: I0202 11:17:28.924526 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerDied","Data":"8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3"} Feb 02 11:17:29 crc kubenswrapper[4782]: I0202 11:17:29.933810 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerStarted","Data":"8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531"} Feb 02 11:17:31 crc kubenswrapper[4782]: I0202 11:17:31.949118 4782 generic.go:334] "Generic (PLEG): container finished" podID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerID="8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531" exitCode=0 Feb 02 11:17:31 crc kubenswrapper[4782]: I0202 11:17:31.949237 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerDied","Data":"8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531"} Feb 02 11:17:32 crc kubenswrapper[4782]: I0202 11:17:32.962697 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerStarted","Data":"aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b"} Feb 02 11:17:32 crc kubenswrapper[4782]: I0202 11:17:32.991737 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p8frt" podStartSLOduration=3.31381119 podStartE2EDuration="6.991718038s" podCreationTimestamp="2026-02-02 11:17:26 +0000 UTC" firstStartedPulling="2026-02-02 11:17:28.926221917 +0000 UTC m=+2328.810414633" lastFinishedPulling="2026-02-02 11:17:32.604128765 +0000 UTC m=+2332.488321481" observedRunningTime="2026-02-02 11:17:32.98417687 +0000 UTC m=+2332.868369606" watchObservedRunningTime="2026-02-02 11:17:32.991718038 +0000 UTC m=+2332.875910754" Feb 02 11:17:37 crc kubenswrapper[4782]: I0202 11:17:37.154854 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:37 crc kubenswrapper[4782]: I0202 11:17:37.155463 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:37 crc kubenswrapper[4782]: I0202 11:17:37.198036 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:38 crc kubenswrapper[4782]: I0202 11:17:38.050822 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:38 crc kubenswrapper[4782]: I0202 11:17:38.100290 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p8frt"] Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.017270 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p8frt" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="registry-server" containerID="cri-o://aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b" gracePeriod=2 Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.502161 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.611532 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-catalog-content\") pod \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.611729 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-utilities\") pod \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.611772 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d567t\" (UniqueName: \"kubernetes.io/projected/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-kube-api-access-d567t\") pod \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.615285 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-utilities" (OuterVolumeSpecName: "utilities") pod "d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" (UID: "d4ca0ce7-81d0-44a7-be69-efa0fde3cffb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.618788 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-kube-api-access-d567t" (OuterVolumeSpecName: "kube-api-access-d567t") pod "d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" (UID: "d4ca0ce7-81d0-44a7-be69-efa0fde3cffb"). InnerVolumeSpecName "kube-api-access-d567t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.678407 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" (UID: "d4ca0ce7-81d0-44a7-be69-efa0fde3cffb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.713878 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.713916 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.713928 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d567t\" (UniqueName: \"kubernetes.io/projected/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-kube-api-access-d567t\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.048936 4782 generic.go:334] "Generic (PLEG): container finished" podID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerID="aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b" exitCode=0 Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.049249 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerDied","Data":"aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b"} Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.049282 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerDied","Data":"a8fcd57b87696b80c8320fe55d90ef869a6fdc47f8b986887188a39b5ad2620b"} Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.049302 4782 scope.go:117] "RemoveContainer" containerID="aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.049501 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.085463 4782 scope.go:117] "RemoveContainer" containerID="8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.087803 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p8frt"] Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.103694 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p8frt"] Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.112514 4782 scope.go:117] "RemoveContainer" containerID="8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.152739 4782 scope.go:117] "RemoveContainer" containerID="aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b" Feb 02 11:17:41 crc kubenswrapper[4782]: E0202 11:17:41.153207 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b\": container with ID starting with aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b not found: ID does not exist" containerID="aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.153238 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b"} err="failed to get container status \"aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b\": rpc error: code = NotFound desc = could not find container \"aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b\": container with ID starting with aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b not found: ID does not exist" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.153264 4782 scope.go:117] "RemoveContainer" containerID="8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531" Feb 02 11:17:41 crc kubenswrapper[4782]: E0202 11:17:41.153683 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531\": container with ID starting with 8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531 not found: ID does not exist" containerID="8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.153702 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531"} err="failed to get container status \"8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531\": rpc error: code = NotFound desc = could not find container \"8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531\": container with ID starting with 8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531 not found: ID does not exist" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.153718 4782 scope.go:117] "RemoveContainer" containerID="8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3" Feb 02 11:17:41 crc kubenswrapper[4782]: E0202 11:17:41.153993 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3\": container with ID starting with 8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3 not found: ID does not exist" containerID="8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.154012 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3"} err="failed to get container status \"8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3\": rpc error: code = NotFound desc = could not find container \"8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3\": container with ID starting with 8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3 not found: ID does not exist" Feb 02 11:17:42 crc kubenswrapper[4782]: I0202 11:17:42.833996 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" path="/var/lib/kubelet/pods/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb/volumes" Feb 02 11:17:52 crc kubenswrapper[4782]: I0202 11:17:52.951209 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:17:52 crc kubenswrapper[4782]: I0202 11:17:52.951773 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:17:54 crc kubenswrapper[4782]: I0202 11:17:54.158587 4782 generic.go:334] "Generic (PLEG): container finished" podID="fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" containerID="199f5b114440d957ca82b1b3791c3a0c06061529752ed8afc79d07dd12184ea0" exitCode=0 Feb 02 11:17:54 crc kubenswrapper[4782]: I0202 11:17:54.158630 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" event={"ID":"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52","Type":"ContainerDied","Data":"199f5b114440d957ca82b1b3791c3a0c06061529752ed8afc79d07dd12184ea0"} Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.555911 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.738873 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ceph\") pod \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.739250 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-inventory\") pod \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.739275 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ssh-key-openstack-edpm-ipam\") pod \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.739313 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-kube-api-access-5zlwc\") pod \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.757136 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ceph" (OuterVolumeSpecName: "ceph") pod "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" (UID: "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.757184 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-kube-api-access-5zlwc" (OuterVolumeSpecName: "kube-api-access-5zlwc") pod "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" (UID: "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52"). InnerVolumeSpecName "kube-api-access-5zlwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.765283 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-inventory" (OuterVolumeSpecName: "inventory") pod "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" (UID: "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.767999 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" (UID: "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.841586 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.841620 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.841631 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.841728 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-kube-api-access-5zlwc\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.174745 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" event={"ID":"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52","Type":"ContainerDied","Data":"7f480997ce7a2aabd74347d417056d5c313af8aadf9a41f33fe17fad0ecdde1b"} Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.174784 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f480997ce7a2aabd74347d417056d5c313af8aadf9a41f33fe17fad0ecdde1b" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.174790 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.275934 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529"] Feb 02 11:17:56 crc kubenswrapper[4782]: E0202 11:17:56.276288 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.276303 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:56 crc kubenswrapper[4782]: E0202 11:17:56.276318 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="extract-content" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.276325 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="extract-content" Feb 02 11:17:56 crc kubenswrapper[4782]: E0202 11:17:56.276376 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="extract-utilities" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.276383 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="extract-utilities" Feb 02 11:17:56 crc kubenswrapper[4782]: E0202 11:17:56.276394 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="registry-server" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.276401 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="registry-server" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.276577 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="registry-server" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.276592 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.277214 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.280156 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.280780 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.281004 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.281075 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.281383 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.294557 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529"] Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.451663 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.451739 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcr5b\" (UniqueName: \"kubernetes.io/projected/df6c52bb-3b4a-4f78-94d0-edee0f68400c-kube-api-access-gcr5b\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.451923 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.451975 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.553718 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.553787 4782 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.553836 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.553863 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcr5b\" (UniqueName: \"kubernetes.io/projected/df6c52bb-3b4a-4f78-94d0-edee0f68400c-kube-api-access-gcr5b\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.566300 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.566299 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.566779 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.572352 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcr5b\" (UniqueName: \"kubernetes.io/projected/df6c52bb-3b4a-4f78-94d0-edee0f68400c-kube-api-access-gcr5b\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.597344 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:57 crc kubenswrapper[4782]: I0202 11:17:57.133258 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529"] Feb 02 11:17:57 crc kubenswrapper[4782]: I0202 11:17:57.187381 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" event={"ID":"df6c52bb-3b4a-4f78-94d0-edee0f68400c","Type":"ContainerStarted","Data":"35c7e59b308ab8245d7d3efeca54f0472cb9852b42fc4b256469fa44da88e114"} Feb 02 11:17:58 crc kubenswrapper[4782]: I0202 11:17:58.208232 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" event={"ID":"df6c52bb-3b4a-4f78-94d0-edee0f68400c","Type":"ContainerStarted","Data":"3ac45a3cb50d94aec6d4a24305c4a1990039d130ec326129bf072b5fba65c9a1"} Feb 02 11:17:58 crc kubenswrapper[4782]: I0202 11:17:58.232048 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" podStartSLOduration=1.7286526740000001 podStartE2EDuration="2.23202351s" podCreationTimestamp="2026-02-02 11:17:56 +0000 UTC" firstStartedPulling="2026-02-02 11:17:57.140809115 +0000 UTC m=+2357.025001831" lastFinishedPulling="2026-02-02 11:17:57.644179941 +0000 UTC m=+2357.528372667" observedRunningTime="2026-02-02 11:17:58.224163684 +0000 UTC m=+2358.108356420" watchObservedRunningTime="2026-02-02 11:17:58.23202351 +0000 UTC m=+2358.116216226" Feb 02 11:18:02 crc kubenswrapper[4782]: I0202 11:18:02.239384 4782 generic.go:334] "Generic (PLEG): container finished" podID="df6c52bb-3b4a-4f78-94d0-edee0f68400c" containerID="3ac45a3cb50d94aec6d4a24305c4a1990039d130ec326129bf072b5fba65c9a1" exitCode=0 Feb 02 11:18:02 crc kubenswrapper[4782]: I0202 11:18:02.239461 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" event={"ID":"df6c52bb-3b4a-4f78-94d0-edee0f68400c","Type":"ContainerDied","Data":"3ac45a3cb50d94aec6d4a24305c4a1990039d130ec326129bf072b5fba65c9a1"} Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.666383 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.787049 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-inventory\") pod \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.787128 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ceph\") pod \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.787351 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcr5b\" (UniqueName: \"kubernetes.io/projected/df6c52bb-3b4a-4f78-94d0-edee0f68400c-kube-api-access-gcr5b\") pod \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.787438 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ssh-key-openstack-edpm-ipam\") pod \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.796449 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ceph" (OuterVolumeSpecName: "ceph") pod "df6c52bb-3b4a-4f78-94d0-edee0f68400c" (UID: "df6c52bb-3b4a-4f78-94d0-edee0f68400c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.796738 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6c52bb-3b4a-4f78-94d0-edee0f68400c-kube-api-access-gcr5b" (OuterVolumeSpecName: "kube-api-access-gcr5b") pod "df6c52bb-3b4a-4f78-94d0-edee0f68400c" (UID: "df6c52bb-3b4a-4f78-94d0-edee0f68400c"). InnerVolumeSpecName "kube-api-access-gcr5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.818039 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-inventory" (OuterVolumeSpecName: "inventory") pod "df6c52bb-3b4a-4f78-94d0-edee0f68400c" (UID: "df6c52bb-3b4a-4f78-94d0-edee0f68400c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.820263 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "df6c52bb-3b4a-4f78-94d0-edee0f68400c" (UID: "df6c52bb-3b4a-4f78-94d0-edee0f68400c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.890294 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.890356 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.890372 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcr5b\" (UniqueName: \"kubernetes.io/projected/df6c52bb-3b4a-4f78-94d0-edee0f68400c-kube-api-access-gcr5b\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.890391 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.260856 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" event={"ID":"df6c52bb-3b4a-4f78-94d0-edee0f68400c","Type":"ContainerDied","Data":"35c7e59b308ab8245d7d3efeca54f0472cb9852b42fc4b256469fa44da88e114"} Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.260936 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35c7e59b308ab8245d7d3efeca54f0472cb9852b42fc4b256469fa44da88e114" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.260903 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.406831 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg"] Feb 02 11:18:04 crc kubenswrapper[4782]: E0202 11:18:04.407221 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6c52bb-3b4a-4f78-94d0-edee0f68400c" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.407247 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6c52bb-3b4a-4f78-94d0-edee0f68400c" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.407428 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6c52bb-3b4a-4f78-94d0-edee0f68400c" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.408048 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.411901 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.412860 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.412880 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.412891 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.413077 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.434758 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg"] Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.501281 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.501381 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.501423 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shscs\" (UniqueName: \"kubernetes.io/projected/6dbc340f-2b20-49aa-8358-26223d367e34-kube-api-access-shscs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.501444 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.602501 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shscs\" (UniqueName: \"kubernetes.io/projected/6dbc340f-2b20-49aa-8358-26223d367e34-kube-api-access-shscs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.602551 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.602673 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.602724 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.607540 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.610132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.620606 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.621399 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shscs\" (UniqueName: \"kubernetes.io/projected/6dbc340f-2b20-49aa-8358-26223d367e34-kube-api-access-shscs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.725439 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:05 crc kubenswrapper[4782]: I0202 11:18:05.254724 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg"] Feb 02 11:18:05 crc kubenswrapper[4782]: I0202 11:18:05.272278 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" event={"ID":"6dbc340f-2b20-49aa-8358-26223d367e34","Type":"ContainerStarted","Data":"6b039f7b5afb87f81b48a9a10a4024e191757e1ef276cd47214bf98018104fea"} Feb 02 11:18:07 crc kubenswrapper[4782]: I0202 11:18:07.295925 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" event={"ID":"6dbc340f-2b20-49aa-8358-26223d367e34","Type":"ContainerStarted","Data":"0073044f871dd4a25cbe4d73162049a859f0266fbb62237ebe19cbb7776e276f"} Feb 02 11:18:07 crc kubenswrapper[4782]: I0202 11:18:07.319507 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" podStartSLOduration=2.106537535 podStartE2EDuration="3.319478007s" podCreationTimestamp="2026-02-02 11:18:04 +0000 UTC" firstStartedPulling="2026-02-02 11:18:05.263013823 +0000 UTC m=+2365.147206539" lastFinishedPulling="2026-02-02 11:18:06.475954295 +0000 UTC m=+2366.360147011" observedRunningTime="2026-02-02 11:18:07.314781902 +0000 UTC m=+2367.198974618" watchObservedRunningTime="2026-02-02 11:18:07.319478007 +0000 UTC m=+2367.203670723" Feb 02 11:18:22 crc kubenswrapper[4782]: I0202 11:18:22.951493 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:18:22 crc kubenswrapper[4782]: I0202 11:18:22.952514 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:18:47 crc kubenswrapper[4782]: I0202 11:18:47.622878 4782 generic.go:334] "Generic (PLEG): container finished" podID="6dbc340f-2b20-49aa-8358-26223d367e34" containerID="0073044f871dd4a25cbe4d73162049a859f0266fbb62237ebe19cbb7776e276f" exitCode=0 Feb 02 11:18:47 crc kubenswrapper[4782]: I0202 11:18:47.622971 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" event={"ID":"6dbc340f-2b20-49aa-8358-26223d367e34","Type":"ContainerDied","Data":"0073044f871dd4a25cbe4d73162049a859f0266fbb62237ebe19cbb7776e276f"} Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.279141 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.359088 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ssh-key-openstack-edpm-ipam\") pod \"6dbc340f-2b20-49aa-8358-26223d367e34\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.359436 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-inventory\") pod \"6dbc340f-2b20-49aa-8358-26223d367e34\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.359525 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shscs\" (UniqueName: \"kubernetes.io/projected/6dbc340f-2b20-49aa-8358-26223d367e34-kube-api-access-shscs\") pod \"6dbc340f-2b20-49aa-8358-26223d367e34\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.359578 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ceph\") pod \"6dbc340f-2b20-49aa-8358-26223d367e34\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.366080 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbc340f-2b20-49aa-8358-26223d367e34-kube-api-access-shscs" (OuterVolumeSpecName: "kube-api-access-shscs") pod "6dbc340f-2b20-49aa-8358-26223d367e34" (UID: "6dbc340f-2b20-49aa-8358-26223d367e34"). InnerVolumeSpecName "kube-api-access-shscs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.382460 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ceph" (OuterVolumeSpecName: "ceph") pod "6dbc340f-2b20-49aa-8358-26223d367e34" (UID: "6dbc340f-2b20-49aa-8358-26223d367e34"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.385029 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6dbc340f-2b20-49aa-8358-26223d367e34" (UID: "6dbc340f-2b20-49aa-8358-26223d367e34"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.390471 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-inventory" (OuterVolumeSpecName: "inventory") pod "6dbc340f-2b20-49aa-8358-26223d367e34" (UID: "6dbc340f-2b20-49aa-8358-26223d367e34"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.461983 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.462026 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.462038 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shscs\" (UniqueName: \"kubernetes.io/projected/6dbc340f-2b20-49aa-8358-26223d367e34-kube-api-access-shscs\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.462048 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.639236 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" event={"ID":"6dbc340f-2b20-49aa-8358-26223d367e34","Type":"ContainerDied","Data":"6b039f7b5afb87f81b48a9a10a4024e191757e1ef276cd47214bf98018104fea"} Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.639280 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b039f7b5afb87f81b48a9a10a4024e191757e1ef276cd47214bf98018104fea" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.639334 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.728169 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j5858"] Feb 02 11:18:49 crc kubenswrapper[4782]: E0202 11:18:49.728519 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbc340f-2b20-49aa-8358-26223d367e34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.728542 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbc340f-2b20-49aa-8358-26223d367e34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.728702 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbc340f-2b20-49aa-8358-26223d367e34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.729220 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.731088 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.731127 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.731441 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.732131 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.733388 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.747753 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j5858"] Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.892144 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ceph\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.892224 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.892344 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhdg\" (UniqueName: \"kubernetes.io/projected/c80c4993-adf6-44f8-a084-21920191de7f-kube-api-access-jrhdg\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.892378 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.994197 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.994322 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhdg\" (UniqueName: \"kubernetes.io/projected/c80c4993-adf6-44f8-a084-21920191de7f-kube-api-access-jrhdg\") pod \"ssh-known-hosts-edpm-deployment-j5858\" 
(UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.994395 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.994711 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ceph\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:50 crc kubenswrapper[4782]: I0202 11:18:49.999977 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ceph\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:50 crc kubenswrapper[4782]: I0202 11:18:50.000527 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:50 crc kubenswrapper[4782]: I0202 11:18:50.018573 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:50 crc kubenswrapper[4782]: I0202 11:18:50.020084 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhdg\" (UniqueName: \"kubernetes.io/projected/c80c4993-adf6-44f8-a084-21920191de7f-kube-api-access-jrhdg\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:50 crc kubenswrapper[4782]: I0202 11:18:50.101829 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:50 crc kubenswrapper[4782]: I0202 11:18:50.707342 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j5858"] Feb 02 11:18:51 crc kubenswrapper[4782]: I0202 11:18:51.658159 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" event={"ID":"c80c4993-adf6-44f8-a084-21920191de7f","Type":"ContainerStarted","Data":"d76ce0556a2fc0cf77f556e5511fd48a3975db4277a83cae1c4bc1a7fcfd2287"} Feb 02 11:18:51 crc kubenswrapper[4782]: I0202 11:18:51.658553 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" event={"ID":"c80c4993-adf6-44f8-a084-21920191de7f","Type":"ContainerStarted","Data":"559b77b336739e3a20431c61a7983c9be76a44e129b7fd8382612c9058b5762c"} Feb 02 11:18:51 crc kubenswrapper[4782]: I0202 11:18:51.679944 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" podStartSLOduration=2.239430591 podStartE2EDuration="2.679925949s" podCreationTimestamp="2026-02-02 11:18:49 +0000 UTC" firstStartedPulling="2026-02-02 11:18:50.706798431 +0000 UTC m=+2410.590991147" lastFinishedPulling="2026-02-02 11:18:51.147293789 +0000 UTC m=+2411.031486505" observedRunningTime="2026-02-02 11:18:51.677750526 +0000 UTC m=+2411.561943252" watchObservedRunningTime="2026-02-02 11:18:51.679925949 +0000 UTC m=+2411.564118665" Feb 02 11:18:52 crc kubenswrapper[4782]: I0202 11:18:52.951051 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:18:52 crc kubenswrapper[4782]: I0202 11:18:52.951378 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:18:52 crc kubenswrapper[4782]: I0202 11:18:52.951427 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:18:52 crc kubenswrapper[4782]: I0202 11:18:52.952239 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:18:52 crc kubenswrapper[4782]: I0202 11:18:52.952289 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" gracePeriod=600 Feb 02 11:18:53 crc kubenswrapper[4782]: E0202 11:18:53.078470 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
Feb 02 11:18:53 crc kubenswrapper[4782]: I0202 11:18:53.676281 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" exitCode=0
Feb 02 11:18:53 crc kubenswrapper[4782]: I0202 11:18:53.676333 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9"}
Feb 02 11:18:53 crc kubenswrapper[4782]: I0202 11:18:53.676370 4782 scope.go:117] "RemoveContainer" containerID="f12380752e6de4f8dedc92e062f8cb6f3d5a16260278e7b8b47bff7dc97ca296"
Feb 02 11:18:53 crc kubenswrapper[4782]: I0202 11:18:53.677031 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9"
Feb 02 11:18:53 crc kubenswrapper[4782]: E0202 11:18:53.677313 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:18:59 crc kubenswrapper[4782]: I0202 11:18:59.727985 4782 generic.go:334] "Generic (PLEG): container finished" podID="c80c4993-adf6-44f8-a084-21920191de7f" containerID="d76ce0556a2fc0cf77f556e5511fd48a3975db4277a83cae1c4bc1a7fcfd2287" exitCode=0
Feb 02 11:18:59 crc kubenswrapper[4782]: I0202 11:18:59.728084 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" event={"ID":"c80c4993-adf6-44f8-a084-21920191de7f","Type":"ContainerDied","Data":"d76ce0556a2fc0cf77f556e5511fd48a3975db4277a83cae1c4bc1a7fcfd2287"}
Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.107686 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5858"
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.194527 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ssh-key-openstack-edpm-ipam\") pod \"c80c4993-adf6-44f8-a084-21920191de7f\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.194628 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrhdg\" (UniqueName: \"kubernetes.io/projected/c80c4993-adf6-44f8-a084-21920191de7f-kube-api-access-jrhdg\") pod \"c80c4993-adf6-44f8-a084-21920191de7f\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.194659 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-inventory-0\") pod \"c80c4993-adf6-44f8-a084-21920191de7f\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.194702 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ceph\") pod \"c80c4993-adf6-44f8-a084-21920191de7f\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.199960 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80c4993-adf6-44f8-a084-21920191de7f-kube-api-access-jrhdg" (OuterVolumeSpecName: "kube-api-access-jrhdg") pod "c80c4993-adf6-44f8-a084-21920191de7f" (UID: "c80c4993-adf6-44f8-a084-21920191de7f"). InnerVolumeSpecName "kube-api-access-jrhdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.200247 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ceph" (OuterVolumeSpecName: "ceph") pod "c80c4993-adf6-44f8-a084-21920191de7f" (UID: "c80c4993-adf6-44f8-a084-21920191de7f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.224048 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c80c4993-adf6-44f8-a084-21920191de7f" (UID: "c80c4993-adf6-44f8-a084-21920191de7f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.235815 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c80c4993-adf6-44f8-a084-21920191de7f" (UID: "c80c4993-adf6-44f8-a084-21920191de7f"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.297056 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrhdg\" (UniqueName: \"kubernetes.io/projected/c80c4993-adf6-44f8-a084-21920191de7f-kube-api-access-jrhdg\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.297093 4782 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.297109 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.297121 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.743522 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" event={"ID":"c80c4993-adf6-44f8-a084-21920191de7f","Type":"ContainerDied","Data":"559b77b336739e3a20431c61a7983c9be76a44e129b7fd8382612c9058b5762c"} Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.743563 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="559b77b336739e3a20431c61a7983c9be76a44e129b7fd8382612c9058b5762c" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.743663 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.821141 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6"] Feb 02 11:19:01 crc kubenswrapper[4782]: E0202 11:19:01.821484 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80c4993-adf6-44f8-a084-21920191de7f" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.821500 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80c4993-adf6-44f8-a084-21920191de7f" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.821696 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80c4993-adf6-44f8-a084-21920191de7f" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.822236 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.824628 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.824834 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.825080 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.825222 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.825468 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.840508 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6"] Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.907322 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.907695 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.907812 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46x99\" (UniqueName: \"kubernetes.io/projected/e25dd29c-ad04-40c3-a682-352af21186fe-kube-api-access-46x99\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.907927 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.009243 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.009617 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.010466 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.010937 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46x99\" (UniqueName: \"kubernetes.io/projected/e25dd29c-ad04-40c3-a682-352af21186fe-kube-api-access-46x99\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.016601 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.027583 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.029381 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.029926 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46x99\" (UniqueName: \"kubernetes.io/projected/e25dd29c-ad04-40c3-a682-352af21186fe-kube-api-access-46x99\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.140803 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.666276 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6"] Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.671587 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.754410 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" event={"ID":"e25dd29c-ad04-40c3-a682-352af21186fe","Type":"ContainerStarted","Data":"d21aee011421776520f27241faf1344e53f0713b464c3544cc7018eb872d1635"} Feb 02 11:19:03 crc kubenswrapper[4782]: I0202 11:19:03.762955 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" event={"ID":"e25dd29c-ad04-40c3-a682-352af21186fe","Type":"ContainerStarted","Data":"74053c80371c983af25f2d15b67b98314186781dec8007085ebf9e0dec406d75"} Feb 02 11:19:03 crc kubenswrapper[4782]: I0202 11:19:03.781576 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" podStartSLOduration=2.230938424 podStartE2EDuration="2.781553751s" podCreationTimestamp="2026-02-02 11:19:01 +0000 UTC" firstStartedPulling="2026-02-02 11:19:02.671318957 +0000 UTC m=+2422.555511673" lastFinishedPulling="2026-02-02 11:19:03.221934284 +0000 UTC m=+2423.106127000" observedRunningTime="2026-02-02 11:19:03.778556935 +0000 UTC m=+2423.662749661" watchObservedRunningTime="2026-02-02 11:19:03.781553751 +0000 UTC m=+2423.665746467" Feb 02 11:19:07 crc kubenswrapper[4782]: I0202 11:19:07.822403 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:19:07 crc kubenswrapper[4782]: E0202 11:19:07.823059 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:19:10 crc kubenswrapper[4782]: I0202 11:19:10.817426 4782 generic.go:334] "Generic (PLEG): container finished" podID="e25dd29c-ad04-40c3-a682-352af21186fe" containerID="74053c80371c983af25f2d15b67b98314186781dec8007085ebf9e0dec406d75" exitCode=0 Feb 02 11:19:10 crc kubenswrapper[4782]: I0202 11:19:10.817598 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" event={"ID":"e25dd29c-ad04-40c3-a682-352af21186fe","Type":"ContainerDied","Data":"74053c80371c983af25f2d15b67b98314186781dec8007085ebf9e0dec406d75"} Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.277374 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.436950 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-inventory\") pod \"e25dd29c-ad04-40c3-a682-352af21186fe\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.437128 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ssh-key-openstack-edpm-ipam\") pod \"e25dd29c-ad04-40c3-a682-352af21186fe\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.437189 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46x99\" (UniqueName: \"kubernetes.io/projected/e25dd29c-ad04-40c3-a682-352af21186fe-kube-api-access-46x99\") pod \"e25dd29c-ad04-40c3-a682-352af21186fe\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.437214 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ceph\") pod \"e25dd29c-ad04-40c3-a682-352af21186fe\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.442698 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ceph" (OuterVolumeSpecName: "ceph") pod "e25dd29c-ad04-40c3-a682-352af21186fe" (UID: "e25dd29c-ad04-40c3-a682-352af21186fe"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.443414 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25dd29c-ad04-40c3-a682-352af21186fe-kube-api-access-46x99" (OuterVolumeSpecName: "kube-api-access-46x99") pod "e25dd29c-ad04-40c3-a682-352af21186fe" (UID: "e25dd29c-ad04-40c3-a682-352af21186fe"). InnerVolumeSpecName "kube-api-access-46x99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.473192 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-inventory" (OuterVolumeSpecName: "inventory") pod "e25dd29c-ad04-40c3-a682-352af21186fe" (UID: "e25dd29c-ad04-40c3-a682-352af21186fe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.474862 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e25dd29c-ad04-40c3-a682-352af21186fe" (UID: "e25dd29c-ad04-40c3-a682-352af21186fe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.539555 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.539918 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46x99\" (UniqueName: \"kubernetes.io/projected/e25dd29c-ad04-40c3-a682-352af21186fe-kube-api-access-46x99\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.540010 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.540093 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.844396 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" event={"ID":"e25dd29c-ad04-40c3-a682-352af21186fe","Type":"ContainerDied","Data":"d21aee011421776520f27241faf1344e53f0713b464c3544cc7018eb872d1635"} Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.844852 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21aee011421776520f27241faf1344e53f0713b464c3544cc7018eb872d1635" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.844887 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.005962 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6"] Feb 02 11:19:13 crc kubenswrapper[4782]: E0202 11:19:13.006317 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25dd29c-ad04-40c3-a682-352af21186fe" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.006333 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25dd29c-ad04-40c3-a682-352af21186fe" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.006534 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e25dd29c-ad04-40c3-a682-352af21186fe" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.007245 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.018158 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.018212 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.018304 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.018489 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.019187 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.019395 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6"] Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.155609 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.155675 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.156014 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.156091 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czmtw\" (UniqueName: \"kubernetes.io/projected/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-kube-api-access-czmtw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.258366 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.258424 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-czmtw\" (UniqueName: \"kubernetes.io/projected/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-kube-api-access-czmtw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.258487 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.258504 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.268384 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.268384 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.268815 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.287071 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czmtw\" (UniqueName: \"kubernetes.io/projected/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-kube-api-access-czmtw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.331883 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.887278 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6"] Feb 02 11:19:14 crc kubenswrapper[4782]: I0202 11:19:14.865466 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" event={"ID":"cfbbb165-d7b2-48c8-b778-5c66afa9c34d","Type":"ContainerStarted","Data":"26b113f966c08a0ed30f4ab74c4b07f22575994e2bd68e638614a034657ae012"} Feb 02 11:19:14 crc kubenswrapper[4782]: I0202 11:19:14.866067 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" event={"ID":"cfbbb165-d7b2-48c8-b778-5c66afa9c34d","Type":"ContainerStarted","Data":"8ba29fe52f1f75d05e90d7384ce81ec580523b328891b7732679d01354979b82"} Feb 02 11:19:14 crc kubenswrapper[4782]: I0202 11:19:14.882082 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" podStartSLOduration=2.289985738 podStartE2EDuration="2.882065249s" podCreationTimestamp="2026-02-02 11:19:12 +0000 UTC" firstStartedPulling="2026-02-02 11:19:13.894727312 +0000 UTC m=+2433.778920028" lastFinishedPulling="2026-02-02 11:19:14.486806833 +0000 UTC m=+2434.370999539" observedRunningTime="2026-02-02 11:19:14.88138192 +0000 UTC m=+2434.765574636" watchObservedRunningTime="2026-02-02 11:19:14.882065249 +0000 UTC m=+2434.766257965" Feb 02 11:19:20 crc kubenswrapper[4782]: I0202 11:19:20.827787 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:19:20 crc kubenswrapper[4782]: E0202 11:19:20.829004 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:19:23 crc kubenswrapper[4782]: I0202 11:19:23.932605 4782 generic.go:334] "Generic (PLEG): container finished" podID="cfbbb165-d7b2-48c8-b778-5c66afa9c34d" containerID="26b113f966c08a0ed30f4ab74c4b07f22575994e2bd68e638614a034657ae012" exitCode=0 Feb 02 11:19:23 crc kubenswrapper[4782]: I0202 11:19:23.932678 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" event={"ID":"cfbbb165-d7b2-48c8-b778-5c66afa9c34d","Type":"ContainerDied","Data":"26b113f966c08a0ed30f4ab74c4b07f22575994e2bd68e638614a034657ae012"} Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.308333 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.387625 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-inventory\") pod \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.387757 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ssh-key-openstack-edpm-ipam\") pod \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.387916 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czmtw\" (UniqueName: \"kubernetes.io/projected/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-kube-api-access-czmtw\") pod \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.387962 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ceph\") pod \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.399845 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ceph" (OuterVolumeSpecName: "ceph") pod "cfbbb165-d7b2-48c8-b778-5c66afa9c34d" (UID: "cfbbb165-d7b2-48c8-b778-5c66afa9c34d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.400036 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-kube-api-access-czmtw" (OuterVolumeSpecName: "kube-api-access-czmtw") pod "cfbbb165-d7b2-48c8-b778-5c66afa9c34d" (UID: "cfbbb165-d7b2-48c8-b778-5c66afa9c34d"). InnerVolumeSpecName "kube-api-access-czmtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.417150 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-inventory" (OuterVolumeSpecName: "inventory") pod "cfbbb165-d7b2-48c8-b778-5c66afa9c34d" (UID: "cfbbb165-d7b2-48c8-b778-5c66afa9c34d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.422620 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cfbbb165-d7b2-48c8-b778-5c66afa9c34d" (UID: "cfbbb165-d7b2-48c8-b778-5c66afa9c34d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.490376 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.490441 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czmtw\" (UniqueName: \"kubernetes.io/projected/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-kube-api-access-czmtw\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.490462 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.490474 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.948568 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" event={"ID":"cfbbb165-d7b2-48c8-b778-5c66afa9c34d","Type":"ContainerDied","Data":"8ba29fe52f1f75d05e90d7384ce81ec580523b328891b7732679d01354979b82"} Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.948877 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ba29fe52f1f75d05e90d7384ce81ec580523b328891b7732679d01354979b82" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.948617 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.042448 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96"] Feb 02 11:19:26 crc kubenswrapper[4782]: E0202 11:19:26.043944 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfbbb165-d7b2-48c8-b778-5c66afa9c34d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.043969 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfbbb165-d7b2-48c8-b778-5c66afa9c34d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.044155 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfbbb165-d7b2-48c8-b778-5c66afa9c34d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.045188 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.048820 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.049518 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.051420 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.051601 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.051725 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.051899 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.051906 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.052380 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.076461 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96"] Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.202564 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.202613 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.202936 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203120 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203163 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6npc\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-kube-api-access-b6npc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203264 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203367 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203405 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203483 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203518 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203540 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 
11:19:26.203651 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203708 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.305817 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.305862 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.305904 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.305948 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.305967 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6npc\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-kube-api-access-b6npc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.305995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306023 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306048 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306079 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306099 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306125 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306162 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306184 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.312203 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.313852 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.316341 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.318542 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.318808 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.319611 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.319720 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.319904 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.319909 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.321180 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.321188 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.324729 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6npc\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-kube-api-access-b6npc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.325021 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.364493 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.877325 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96"] Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.959924 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" event={"ID":"ae3151c2-1646-4d94-93d0-df34ad53d344","Type":"ContainerStarted","Data":"e508ea928e1afeacf63b3b4096d6a89ed38e56b8b7c800b3a69b6eb8ab4fdf4e"} Feb 02 11:19:28 crc kubenswrapper[4782]: I0202 11:19:28.978166 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" event={"ID":"ae3151c2-1646-4d94-93d0-df34ad53d344","Type":"ContainerStarted","Data":"98ca67596d9a2cdbd4c2ea4268ddb6b30d3462bd3ba6c9c09fa209503619836c"} Feb 02 11:19:29 crc kubenswrapper[4782]: I0202 11:19:29.003118 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" podStartSLOduration=2.016322773 podStartE2EDuration="3.003093804s" podCreationTimestamp="2026-02-02 11:19:26 +0000 UTC" firstStartedPulling="2026-02-02 11:19:26.885939629 +0000 UTC m=+2446.770132345" lastFinishedPulling="2026-02-02 11:19:27.87271065 +0000 UTC m=+2447.756903376" observedRunningTime="2026-02-02 11:19:28.998827921 +0000 UTC m=+2448.883020637" watchObservedRunningTime="2026-02-02 11:19:29.003093804 +0000 UTC m=+2448.887286540" Feb 02 11:19:35 crc kubenswrapper[4782]: I0202 11:19:35.821285 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:19:35 crc kubenswrapper[4782]: E0202 11:19:35.822100 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:19:47 crc kubenswrapper[4782]: I0202 11:19:47.821724 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:19:47 crc kubenswrapper[4782]: E0202 11:19:47.822541 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:19:57 crc kubenswrapper[4782]: I0202 11:19:57.188843 4782 generic.go:334] "Generic (PLEG): container finished" podID="ae3151c2-1646-4d94-93d0-df34ad53d344" containerID="98ca67596d9a2cdbd4c2ea4268ddb6b30d3462bd3ba6c9c09fa209503619836c" exitCode=0 Feb 02 11:19:57 crc kubenswrapper[4782]: I0202 11:19:57.188922 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" 
event={"ID":"ae3151c2-1646-4d94-93d0-df34ad53d344","Type":"ContainerDied","Data":"98ca67596d9a2cdbd4c2ea4268ddb6b30d3462bd3ba6c9c09fa209503619836c"} Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.607260 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.768158 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ssh-key-openstack-edpm-ipam\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.768225 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-repo-setup-combined-ca-bundle\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769091 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-inventory\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769136 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-libvirt-combined-ca-bundle\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769153 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-nova-combined-ca-bundle\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769176 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-bootstrap-combined-ca-bundle\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769205 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ceph\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769329 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769350 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-ovn-default-certs-0\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769373 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-neutron-metadata-combined-ca-bundle\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769404 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6npc\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-kube-api-access-b6npc\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769485 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ovn-combined-ca-bundle\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769534 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.775417 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.775565 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.776231 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.776281 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.777851 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.778446 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.778530 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-kube-api-access-b6npc" (OuterVolumeSpecName: "kube-api-access-b6npc") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "kube-api-access-b6npc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.779160 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.780872 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.781994 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ceph" (OuterVolumeSpecName: "ceph") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.803372 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.802604 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.827300 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-inventory" (OuterVolumeSpecName: "inventory") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871615 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871669 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871680 4782 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871690 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6npc\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-kube-api-access-b6npc\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871699 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871708 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871718 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871728 4782 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871738 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871746 4782 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871755 4782 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871763 4782 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871775 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.207100 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" event={"ID":"ae3151c2-1646-4d94-93d0-df34ad53d344","Type":"ContainerDied","Data":"e508ea928e1afeacf63b3b4096d6a89ed38e56b8b7c800b3a69b6eb8ab4fdf4e"} Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.207133 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e508ea928e1afeacf63b3b4096d6a89ed38e56b8b7c800b3a69b6eb8ab4fdf4e" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.207141 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.449286 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb"] Feb 02 11:19:59 crc kubenswrapper[4782]: E0202 11:19:59.449749 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3151c2-1646-4d94-93d0-df34ad53d344" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.449773 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3151c2-1646-4d94-93d0-df34ad53d344" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.449968 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3151c2-1646-4d94-93d0-df34ad53d344" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.450741 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.454038 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.454444 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.454733 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.454753 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.462264 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.480896 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb"] Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.481039 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.481097 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9flfc\" (UniqueName: \"kubernetes.io/projected/c0c31114-71d7-4d0b-9ad7-74945ed819e3-kube-api-access-9flfc\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.481210 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: 
\"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.481256 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.582935 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.583191 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.583306 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9flfc\" (UniqueName: \"kubernetes.io/projected/c0c31114-71d7-4d0b-9ad7-74945ed819e3-kube-api-access-9flfc\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.583469 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.591415 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.591442 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.594103 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.617356 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9flfc\" (UniqueName: \"kubernetes.io/projected/c0c31114-71d7-4d0b-9ad7-74945ed819e3-kube-api-access-9flfc\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.765662 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:20:00 crc kubenswrapper[4782]: I0202 11:20:00.307481 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb"] Feb 02 11:20:01 crc kubenswrapper[4782]: I0202 11:20:01.232292 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" event={"ID":"c0c31114-71d7-4d0b-9ad7-74945ed819e3","Type":"ContainerStarted","Data":"b1b912997a84ded9491b2639ec1a478ecd95835ed273174c31ee28a90b396db3"} Feb 02 11:20:02 crc kubenswrapper[4782]: I0202 11:20:02.244227 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" event={"ID":"c0c31114-71d7-4d0b-9ad7-74945ed819e3","Type":"ContainerStarted","Data":"f3fb12fed1ebe9e4eb9525d8eeb5e45a8ffb7c55abf9d79d36f5b948b06bec47"} Feb 02 11:20:02 crc kubenswrapper[4782]: I0202 11:20:02.821878 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:20:02 crc kubenswrapper[4782]: E0202 11:20:02.822136 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:20:07 crc kubenswrapper[4782]: I0202 11:20:07.288377 4782 generic.go:334] "Generic (PLEG): container finished" podID="c0c31114-71d7-4d0b-9ad7-74945ed819e3" containerID="f3fb12fed1ebe9e4eb9525d8eeb5e45a8ffb7c55abf9d79d36f5b948b06bec47" exitCode=0 Feb 02 11:20:07 crc kubenswrapper[4782]: I0202 11:20:07.288612 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" event={"ID":"c0c31114-71d7-4d0b-9ad7-74945ed819e3","Type":"ContainerDied","Data":"f3fb12fed1ebe9e4eb9525d8eeb5e45a8ffb7c55abf9d79d36f5b948b06bec47"} Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.755762 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.874095 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-inventory\") pod \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.874317 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9flfc\" (UniqueName: \"kubernetes.io/projected/c0c31114-71d7-4d0b-9ad7-74945ed819e3-kube-api-access-9flfc\") pod \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.874393 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ceph\") pod \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.874501 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ssh-key-openstack-edpm-ipam\") pod \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.948881 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ceph" (OuterVolumeSpecName: "ceph") pod "c0c31114-71d7-4d0b-9ad7-74945ed819e3" (UID: "c0c31114-71d7-4d0b-9ad7-74945ed819e3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.949931 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c31114-71d7-4d0b-9ad7-74945ed819e3-kube-api-access-9flfc" (OuterVolumeSpecName: "kube-api-access-9flfc") pod "c0c31114-71d7-4d0b-9ad7-74945ed819e3" (UID: "c0c31114-71d7-4d0b-9ad7-74945ed819e3"). InnerVolumeSpecName "kube-api-access-9flfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.954495 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-inventory" (OuterVolumeSpecName: "inventory") pod "c0c31114-71d7-4d0b-9ad7-74945ed819e3" (UID: "c0c31114-71d7-4d0b-9ad7-74945ed819e3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.958065 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c0c31114-71d7-4d0b-9ad7-74945ed819e3" (UID: "c0c31114-71d7-4d0b-9ad7-74945ed819e3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.996311 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.996351 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9flfc\" (UniqueName: \"kubernetes.io/projected/c0c31114-71d7-4d0b-9ad7-74945ed819e3-kube-api-access-9flfc\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.996362 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.996374 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.309479 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" event={"ID":"c0c31114-71d7-4d0b-9ad7-74945ed819e3","Type":"ContainerDied","Data":"b1b912997a84ded9491b2639ec1a478ecd95835ed273174c31ee28a90b396db3"} Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.309529 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1b912997a84ded9491b2639ec1a478ecd95835ed273174c31ee28a90b396db3" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.309583 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.416514 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6"] Feb 02 11:20:09 crc kubenswrapper[4782]: E0202 11:20:09.416952 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c31114-71d7-4d0b-9ad7-74945ed819e3" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.416972 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c31114-71d7-4d0b-9ad7-74945ed819e3" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.417173 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c31114-71d7-4d0b-9ad7-74945ed819e3" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.417910 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.419658 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.420104 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.421085 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.421257 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.423426 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.427254 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.434828 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6"] Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.605934 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4a473fb4-7a3c-4103-bad5-570b683e6222-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.605999 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.606038 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.606099 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98st2\" (UniqueName: \"kubernetes.io/projected/4a473fb4-7a3c-4103-bad5-570b683e6222-kube-api-access-98st2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.606190 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.606209 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.707997 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.708068 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.708120 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4a473fb4-7a3c-4103-bad5-570b683e6222-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.708162 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.708198 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.708278 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98st2\" (UniqueName: \"kubernetes.io/projected/4a473fb4-7a3c-4103-bad5-570b683e6222-kube-api-access-98st2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.709412 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4a473fb4-7a3c-4103-bad5-570b683e6222-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.715163 
4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.715510 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.716383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.727633 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.737621 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98st2\" (UniqueName: \"kubernetes.io/projected/4a473fb4-7a3c-4103-bad5-570b683e6222-kube-api-access-98st2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.740945 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:10 crc kubenswrapper[4782]: I0202 11:20:10.161432 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6"] Feb 02 11:20:10 crc kubenswrapper[4782]: I0202 11:20:10.321792 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" event={"ID":"4a473fb4-7a3c-4103-bad5-570b683e6222","Type":"ContainerStarted","Data":"72544a004c2f1da86f062d201fe9dd1dd044fb710909d57929aa10ec1c1eafcb"} Feb 02 11:20:11 crc kubenswrapper[4782]: I0202 11:20:11.333486 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" event={"ID":"4a473fb4-7a3c-4103-bad5-570b683e6222","Type":"ContainerStarted","Data":"acc292f462d713c39fcd55399a1ba18c9c5c3f1db54d2761b6d464dccea5645f"} Feb 02 11:20:11 crc kubenswrapper[4782]: I0202 11:20:11.360203 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" podStartSLOduration=1.7983493799999999 podStartE2EDuration="2.36016783s" podCreationTimestamp="2026-02-02 11:20:09 +0000 UTC" firstStartedPulling="2026-02-02 11:20:10.174999309 +0000 UTC m=+2490.059192025" lastFinishedPulling="2026-02-02 11:20:10.736817759 +0000 UTC m=+2490.621010475" observedRunningTime="2026-02-02 11:20:11.356835654 +0000 UTC m=+2491.241028370" watchObservedRunningTime="2026-02-02 11:20:11.36016783 +0000 UTC m=+2491.244360536" Feb 02 11:20:14 crc kubenswrapper[4782]: I0202 11:20:14.821418 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:20:14 crc kubenswrapper[4782]: E0202 11:20:14.822428 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:20:26 crc kubenswrapper[4782]: I0202 11:20:26.822236 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:20:26 crc kubenswrapper[4782]: E0202 11:20:26.823463 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:20:41 crc kubenswrapper[4782]: I0202 11:20:41.821798 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:20:41 crc kubenswrapper[4782]: E0202 11:20:41.822863 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:20:55 crc kubenswrapper[4782]: I0202 11:20:55.821996 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:20:55 crc kubenswrapper[4782]: E0202 11:20:55.822841 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:21:10 crc kubenswrapper[4782]: I0202 11:21:10.826178 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:21:10 crc kubenswrapper[4782]: E0202 11:21:10.826926 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:21:23 crc kubenswrapper[4782]: I0202 11:21:23.821807 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:21:23 crc kubenswrapper[4782]: E0202 11:21:23.822942 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:21:23 crc kubenswrapper[4782]: I0202 11:21:23.994437 4782 generic.go:334] "Generic (PLEG): container finished" podID="4a473fb4-7a3c-4103-bad5-570b683e6222" containerID="acc292f462d713c39fcd55399a1ba18c9c5c3f1db54d2761b6d464dccea5645f" exitCode=0 Feb 02 11:21:23 crc kubenswrapper[4782]: I0202 11:21:23.994487 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" event={"ID":"4a473fb4-7a3c-4103-bad5-570b683e6222","Type":"ContainerDied","Data":"acc292f462d713c39fcd55399a1ba18c9c5c3f1db54d2761b6d464dccea5645f"} Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.374512 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.486936 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-inventory\") pod \"4a473fb4-7a3c-4103-bad5-570b683e6222\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.486999 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ovn-combined-ca-bundle\") pod \"4a473fb4-7a3c-4103-bad5-570b683e6222\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.487074 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ceph\") pod \"4a473fb4-7a3c-4103-bad5-570b683e6222\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.487113 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ssh-key-openstack-edpm-ipam\") pod \"4a473fb4-7a3c-4103-bad5-570b683e6222\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.487139 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4a473fb4-7a3c-4103-bad5-570b683e6222-ovncontroller-config-0\") pod \"4a473fb4-7a3c-4103-bad5-570b683e6222\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.487248 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98st2\" (UniqueName: \"kubernetes.io/projected/4a473fb4-7a3c-4103-bad5-570b683e6222-kube-api-access-98st2\") pod \"4a473fb4-7a3c-4103-bad5-570b683e6222\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.492966 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ceph" (OuterVolumeSpecName: "ceph") pod "4a473fb4-7a3c-4103-bad5-570b683e6222" (UID: "4a473fb4-7a3c-4103-bad5-570b683e6222"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.493222 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a473fb4-7a3c-4103-bad5-570b683e6222-kube-api-access-98st2" (OuterVolumeSpecName: "kube-api-access-98st2") pod "4a473fb4-7a3c-4103-bad5-570b683e6222" (UID: "4a473fb4-7a3c-4103-bad5-570b683e6222"). InnerVolumeSpecName "kube-api-access-98st2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.500479 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4a473fb4-7a3c-4103-bad5-570b683e6222" (UID: "4a473fb4-7a3c-4103-bad5-570b683e6222"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.515559 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a473fb4-7a3c-4103-bad5-570b683e6222-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "4a473fb4-7a3c-4103-bad5-570b683e6222" (UID: "4a473fb4-7a3c-4103-bad5-570b683e6222"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.517696 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4a473fb4-7a3c-4103-bad5-570b683e6222" (UID: "4a473fb4-7a3c-4103-bad5-570b683e6222"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.518336 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-inventory" (OuterVolumeSpecName: "inventory") pod "4a473fb4-7a3c-4103-bad5-570b683e6222" (UID: "4a473fb4-7a3c-4103-bad5-570b683e6222"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.589242 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.589278 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.589289 4782 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4a473fb4-7a3c-4103-bad5-570b683e6222-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.589300 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98st2\" (UniqueName: \"kubernetes.io/projected/4a473fb4-7a3c-4103-bad5-570b683e6222-kube-api-access-98st2\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.589308 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.589316 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.012890 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" event={"ID":"4a473fb4-7a3c-4103-bad5-570b683e6222","Type":"ContainerDied","Data":"72544a004c2f1da86f062d201fe9dd1dd044fb710909d57929aa10ec1c1eafcb"} Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.012928 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.012932 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72544a004c2f1da86f062d201fe9dd1dd044fb710909d57929aa10ec1c1eafcb" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.219769 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7"] Feb 02 11:21:26 crc kubenswrapper[4782]: E0202 11:21:26.220201 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a473fb4-7a3c-4103-bad5-570b683e6222" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.220221 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a473fb4-7a3c-4103-bad5-570b683e6222" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.220420 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a473fb4-7a3c-4103-bad5-570b683e6222" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.222438 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.224862 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.226584 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.226615 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.226672 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.226590 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.226865 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.235126 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.245818 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7"] Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.308718 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.308799 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.308947 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.309053 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.309085 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.309145 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.309233 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfhcm\" (UniqueName: \"kubernetes.io/projected/e6849945-28f4-4218-97c1-6047c2d0c368-kube-api-access-wfhcm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.410505 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.410922 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.410949 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.410976 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.411013 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfhcm\" (UniqueName: \"kubernetes.io/projected/e6849945-28f4-4218-97c1-6047c2d0c368-kube-api-access-wfhcm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.411043 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.411073 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.414371 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.414871 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.415279 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceph\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.417525 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.418126 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.421556 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.433566 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfhcm\" (UniqueName: \"kubernetes.io/projected/e6849945-28f4-4218-97c1-6047c2d0c368-kube-api-access-wfhcm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.544271 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:27 crc kubenswrapper[4782]: I0202 11:21:27.116581 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7"] Feb 02 11:21:28 crc kubenswrapper[4782]: I0202 11:21:28.030773 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" event={"ID":"e6849945-28f4-4218-97c1-6047c2d0c368","Type":"ContainerStarted","Data":"5b4f07ff15cbaace5049bb3eff108520caa8f61961e415e66aa5a801b9e5887e"} Feb 02 11:21:29 crc kubenswrapper[4782]: I0202 11:21:29.040432 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" event={"ID":"e6849945-28f4-4218-97c1-6047c2d0c368","Type":"ContainerStarted","Data":"c972dc47d6157e350c05293015a8979dd3252340d3b9b01b92f1e1cd1f5ff0df"} Feb 02 11:21:29 crc kubenswrapper[4782]: I0202 11:21:29.060430 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" podStartSLOduration=2.122294764 podStartE2EDuration="3.060410494s" podCreationTimestamp="2026-02-02 11:21:26 +0000 UTC" firstStartedPulling="2026-02-02 11:21:27.12290253 +0000 UTC m=+2567.007095246" lastFinishedPulling="2026-02-02 11:21:28.06101826 +0000 UTC m=+2567.945210976" observedRunningTime="2026-02-02 11:21:29.054070241 +0000 UTC m=+2568.938262957" watchObservedRunningTime="2026-02-02 11:21:29.060410494 +0000 UTC m=+2568.944603210" Feb 02 11:21:38 crc kubenswrapper[4782]: I0202 11:21:38.821583 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:21:38 crc kubenswrapper[4782]: E0202 11:21:38.822439 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:21:53 crc kubenswrapper[4782]: I0202 11:21:53.824473 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:21:53 crc kubenswrapper[4782]: E0202 11:21:53.825532 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:22:05 crc kubenswrapper[4782]: I0202 11:22:05.821173 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:22:05 crc kubenswrapper[4782]: E0202 11:22:05.822069 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:22:20 crc kubenswrapper[4782]: I0202 11:22:20.827326 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:22:20 crc kubenswrapper[4782]: E0202 11:22:20.828235 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:22:23 crc kubenswrapper[4782]: I0202 11:22:23.488777 4782 generic.go:334] "Generic (PLEG): container finished" podID="e6849945-28f4-4218-97c1-6047c2d0c368" containerID="c972dc47d6157e350c05293015a8979dd3252340d3b9b01b92f1e1cd1f5ff0df" exitCode=0 Feb 02 11:22:23 crc kubenswrapper[4782]: I0202 11:22:23.488883 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" event={"ID":"e6849945-28f4-4218-97c1-6047c2d0c368","Type":"ContainerDied","Data":"c972dc47d6157e350c05293015a8979dd3252340d3b9b01b92f1e1cd1f5ff0df"} Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.901129 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.941761 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ceph\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.941832 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-nova-metadata-neutron-config-0\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.941899 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-inventory\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.941951 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-metadata-combined-ca-bundle\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.941994 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ssh-key-openstack-edpm-ipam\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.942026 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wfhcm\" (UniqueName: \"kubernetes.io/projected/e6849945-28f4-4218-97c1-6047c2d0c368-kube-api-access-wfhcm\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.942065 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.956325 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ceph" (OuterVolumeSpecName: "ceph") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.956382 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.963303 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6849945-28f4-4218-97c1-6047c2d0c368-kube-api-access-wfhcm" (OuterVolumeSpecName: "kube-api-access-wfhcm") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "kube-api-access-wfhcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.968171 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.970236 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.970552 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-inventory" (OuterVolumeSpecName: "inventory") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.977983 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044860 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044904 4782 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044916 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044926 4782 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044937 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044949 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfhcm\" (UniqueName: \"kubernetes.io/projected/e6849945-28f4-4218-97c1-6047c2d0c368-kube-api-access-wfhcm\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044958 4782 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.504304 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" event={"ID":"e6849945-28f4-4218-97c1-6047c2d0c368","Type":"ContainerDied","Data":"5b4f07ff15cbaace5049bb3eff108520caa8f61961e415e66aa5a801b9e5887e"} Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.504346 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b4f07ff15cbaace5049bb3eff108520caa8f61961e415e66aa5a801b9e5887e" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.504408 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.625232 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj"] Feb 02 11:22:25 crc kubenswrapper[4782]: E0202 11:22:25.625632 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6849945-28f4-4218-97c1-6047c2d0c368" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.625679 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6849945-28f4-4218-97c1-6047c2d0c368" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.625896 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6849945-28f4-4218-97c1-6047c2d0c368" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.626571 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.629598 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.629814 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.629905 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.629907 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.632090 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.633473 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.640030 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj"] Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.657935 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.658224 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.658349 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.658440 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7cv\" (UniqueName: \"kubernetes.io/projected/9b66a766-dc87-45dd-a611-d9a30c3f327e-kube-api-access-5b7cv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.658555 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.658697 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.760024 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.760103 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.760181 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.760242 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.760282 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.760299 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b7cv\" (UniqueName: \"kubernetes.io/projected/9b66a766-dc87-45dd-a611-d9a30c3f327e-kube-api-access-5b7cv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.765184 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.765206 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.765531 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.766972 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.772099 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.782527 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b7cv\" (UniqueName: \"kubernetes.io/projected/9b66a766-dc87-45dd-a611-d9a30c3f327e-kube-api-access-5b7cv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.944809 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 11:22:26 crc kubenswrapper[4782]: I0202 11:22:26.465438 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj"]
Feb 02 11:22:26 crc kubenswrapper[4782]: W0202 11:22:26.470825 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b66a766_dc87_45dd_a611_d9a30c3f327e.slice/crio-8ed3893ae70fd1cc2806d2c6231e36eed31c1d7012844cba421ea45199589821 WatchSource:0}: Error finding container 8ed3893ae70fd1cc2806d2c6231e36eed31c1d7012844cba421ea45199589821: Status 404 returned error can't find the container with id 8ed3893ae70fd1cc2806d2c6231e36eed31c1d7012844cba421ea45199589821
Feb 02 11:22:26 crc kubenswrapper[4782]: I0202 11:22:26.515979 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" event={"ID":"9b66a766-dc87-45dd-a611-d9a30c3f327e","Type":"ContainerStarted","Data":"8ed3893ae70fd1cc2806d2c6231e36eed31c1d7012844cba421ea45199589821"}
Feb 02 11:22:27 crc kubenswrapper[4782]: I0202 11:22:27.527759 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" event={"ID":"9b66a766-dc87-45dd-a611-d9a30c3f327e","Type":"ContainerStarted","Data":"4941491cf1df1bc7280b824efa5aa4ba9575dbca5ae4407e9126b0211ca2c981"}
Feb 02 11:22:27 crc kubenswrapper[4782]: I0202 11:22:27.550556 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" podStartSLOduration=2.042507718 podStartE2EDuration="2.55053748s" podCreationTimestamp="2026-02-02 11:22:25 +0000 UTC" firstStartedPulling="2026-02-02 11:22:26.474130329 +0000 UTC m=+2626.358323045" lastFinishedPulling="2026-02-02 11:22:26.982160091 +0000 UTC m=+2626.866352807" observedRunningTime="2026-02-02 11:22:27.546526945 +0000 UTC m=+2627.430719671" watchObservedRunningTime="2026-02-02 11:22:27.55053748 +0000 UTC m=+2627.434730196"
Feb 02 11:22:31 crc kubenswrapper[4782]: I0202 11:22:31.822701 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9"
Feb 02 11:22:31 crc kubenswrapper[4782]: E0202 11:22:31.823906 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:22:45 crc kubenswrapper[4782]: I0202 11:22:45.820946 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9"
Feb 02 11:22:45 crc kubenswrapper[4782]: E0202 11:22:45.821885 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:22:59 crc kubenswrapper[4782]: I0202 11:22:59.821326 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9"
scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:22:59 crc kubenswrapper[4782]: E0202 11:22:59.822088 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:23:13 crc kubenswrapper[4782]: I0202 11:23:13.821819 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:23:13 crc kubenswrapper[4782]: E0202 11:23:13.822827 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:23:27 crc kubenswrapper[4782]: I0202 11:23:27.821962 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:23:27 crc kubenswrapper[4782]: E0202 11:23:27.823040 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:23:38 crc kubenswrapper[4782]: I0202 11:23:38.822856 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:23:38 crc kubenswrapper[4782]: E0202 11:23:38.823685 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:23:50 crc kubenswrapper[4782]: I0202 11:23:50.827971 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:23:50 crc kubenswrapper[4782]: E0202 11:23:50.828784 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:24:05 crc kubenswrapper[4782]: I0202 11:24:05.821836 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:24:06 crc kubenswrapper[4782]: I0202 11:24:06.332277 4782 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"6f3d837b63dfbe34932b87b521d0696398b6ad3538c5af0b35f7849a712f00d7"} Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.624144 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gtqf4"] Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.647146 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.672621 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gtqf4"] Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.730708 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85br2\" (UniqueName: \"kubernetes.io/projected/ade4aa13-eb8f-45b6-930c-278af990ff9f-kube-api-access-85br2\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.731270 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-catalog-content\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.731617 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-utilities\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.833594 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85br2\" (UniqueName: \"kubernetes.io/projected/ade4aa13-eb8f-45b6-930c-278af990ff9f-kube-api-access-85br2\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.833662 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-catalog-content\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.833724 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-utilities\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.834453 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-catalog-content\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " 
pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.835132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-utilities\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.861239 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85br2\" (UniqueName: \"kubernetes.io/projected/ade4aa13-eb8f-45b6-930c-278af990ff9f-kube-api-access-85br2\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.977494 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:19 crc kubenswrapper[4782]: I0202 11:25:19.663940 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gtqf4"] Feb 02 11:25:19 crc kubenswrapper[4782]: I0202 11:25:19.920206 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerStarted","Data":"a9280494df3ae0214f5fa0eaabf1e19bb7063ddf8696aadacc84bd731eb37e75"} Feb 02 11:25:20 crc kubenswrapper[4782]: I0202 11:25:20.931075 4782 generic.go:334] "Generic (PLEG): container finished" podID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerID="e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b" exitCode=0 Feb 02 11:25:20 crc kubenswrapper[4782]: I0202 11:25:20.931350 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerDied","Data":"e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b"} Feb 02 11:25:20 crc kubenswrapper[4782]: I0202 11:25:20.933833 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:25:22 crc kubenswrapper[4782]: I0202 11:25:22.949874 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerStarted","Data":"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe"} Feb 02 11:25:27 crc kubenswrapper[4782]: I0202 11:25:27.995143 4782 generic.go:334] "Generic (PLEG): container finished" podID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerID="b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe" exitCode=0 Feb 02 11:25:27 crc kubenswrapper[4782]: I0202 11:25:27.995559 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerDied","Data":"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe"} Feb 02 11:25:29 crc kubenswrapper[4782]: I0202 11:25:29.009787 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerStarted","Data":"34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e"} Feb 02 11:25:29 crc kubenswrapper[4782]: I0202 11:25:29.032225 4782 
Feb 02 11:25:38 crc kubenswrapper[4782]: I0202 11:25:38.978524 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gtqf4"
Feb 02 11:25:38 crc kubenswrapper[4782]: I0202 11:25:38.978967 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gtqf4"
Feb 02 11:25:39 crc kubenswrapper[4782]: I0202 11:25:39.024390 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gtqf4"
Feb 02 11:25:39 crc kubenswrapper[4782]: I0202 11:25:39.145260 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gtqf4"
Feb 02 11:25:39 crc kubenswrapper[4782]: I0202 11:25:39.261148 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gtqf4"]
Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.111154 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gtqf4" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="registry-server" containerID="cri-o://34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e" gracePeriod=2
Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.619768 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtqf4"
Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.712533 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85br2\" (UniqueName: \"kubernetes.io/projected/ade4aa13-eb8f-45b6-930c-278af990ff9f-kube-api-access-85br2\") pod \"ade4aa13-eb8f-45b6-930c-278af990ff9f\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") "
Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.712633 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-catalog-content\") pod \"ade4aa13-eb8f-45b6-930c-278af990ff9f\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") "
Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.712703 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-utilities\") pod \"ade4aa13-eb8f-45b6-930c-278af990ff9f\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") "
Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.713498 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-utilities" (OuterVolumeSpecName: "utilities") pod "ade4aa13-eb8f-45b6-930c-278af990ff9f" (UID: "ade4aa13-eb8f-45b6-930c-278af990ff9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.720855 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade4aa13-eb8f-45b6-930c-278af990ff9f-kube-api-access-85br2" (OuterVolumeSpecName: "kube-api-access-85br2") pod "ade4aa13-eb8f-45b6-930c-278af990ff9f" (UID: "ade4aa13-eb8f-45b6-930c-278af990ff9f"). InnerVolumeSpecName "kube-api-access-85br2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.814367 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85br2\" (UniqueName: \"kubernetes.io/projected/ade4aa13-eb8f-45b6-930c-278af990ff9f-kube-api-access-85br2\") on node \"crc\" DevicePath \"\"" Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.814399 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.834131 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ade4aa13-eb8f-45b6-930c-278af990ff9f" (UID: "ade4aa13-eb8f-45b6-930c-278af990ff9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.916451 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.122876 4782 generic.go:334] "Generic (PLEG): container finished" podID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerID="34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e" exitCode=0 Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.122983 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerDied","Data":"34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e"} Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.123079 4782 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.123307 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerDied","Data":"a9280494df3ae0214f5fa0eaabf1e19bb7063ddf8696aadacc84bd731eb37e75"}
Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.123336 4782 scope.go:117] "RemoveContainer" containerID="34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e"
Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.149316 4782 scope.go:117] "RemoveContainer" containerID="b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe"
Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.180337 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gtqf4"]
Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.203302 4782 scope.go:117] "RemoveContainer" containerID="e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b"
Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.223453 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gtqf4"]
Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.254296 4782 scope.go:117] "RemoveContainer" containerID="34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e"
Feb 02 11:25:42 crc kubenswrapper[4782]: E0202 11:25:42.254754 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e\": container with ID starting with 34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e not found: ID does not exist" containerID="34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e"
Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.254794 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e"} err="failed to get container status \"34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e\": rpc error: code = NotFound desc = could not find container \"34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e\": container with ID starting with 34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e not found: ID does not exist"
Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.254822 4782 scope.go:117] "RemoveContainer" containerID="b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe"
Feb 02 11:25:42 crc kubenswrapper[4782]: E0202 11:25:42.255089 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe\": container with ID starting with b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe not found: ID does not exist" containerID="b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe"
Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.255115 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe"} err="failed to get container status \"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe\": rpc error: code = NotFound desc = could not find container \"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe\": container with ID starting with b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe not found: ID does not exist"
\"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe\": container with ID starting with b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe not found: ID does not exist" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.255134 4782 scope.go:117] "RemoveContainer" containerID="e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b" Feb 02 11:25:42 crc kubenswrapper[4782]: E0202 11:25:42.255411 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b\": container with ID starting with e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b not found: ID does not exist" containerID="e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.255439 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b"} err="failed to get container status \"e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b\": rpc error: code = NotFound desc = could not find container \"e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b\": container with ID starting with e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b not found: ID does not exist" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.831944 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" path="/var/lib/kubelet/pods/ade4aa13-eb8f-45b6-930c-278af990ff9f/volumes" Feb 02 11:26:22 crc kubenswrapper[4782]: I0202 11:26:22.950957 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:26:22 crc kubenswrapper[4782]: I0202 11:26:22.951553 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.180433 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z42rt"] Feb 02 11:26:33 crc kubenswrapper[4782]: E0202 11:26:33.181226 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="extract-content" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.181240 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="extract-content" Feb 02 11:26:33 crc kubenswrapper[4782]: E0202 11:26:33.181276 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="extract-utilities" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.181283 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="extract-utilities" Feb 02 11:26:33 crc kubenswrapper[4782]: E0202 11:26:33.181306 4782 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="registry-server" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.181313 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="registry-server" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.181499 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="registry-server" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.182688 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.198037 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z42rt"] Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.217550 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-utilities\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.217608 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mfc8\" (UniqueName: \"kubernetes.io/projected/42224916-385d-4dd6-96c5-3e4080fac20e-kube-api-access-5mfc8\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.217692 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-catalog-content\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.319264 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-utilities\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.319325 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mfc8\" (UniqueName: \"kubernetes.io/projected/42224916-385d-4dd6-96c5-3e4080fac20e-kube-api-access-5mfc8\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.319361 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-catalog-content\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.319876 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-utilities\") pod \"certified-operators-z42rt\" 
(UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.319940 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-catalog-content\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.340553 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mfc8\" (UniqueName: \"kubernetes.io/projected/42224916-385d-4dd6-96c5-3e4080fac20e-kube-api-access-5mfc8\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.511748 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:34 crc kubenswrapper[4782]: I0202 11:26:34.133597 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z42rt"] Feb 02 11:26:34 crc kubenswrapper[4782]: I0202 11:26:34.554093 4782 generic.go:334] "Generic (PLEG): container finished" podID="42224916-385d-4dd6-96c5-3e4080fac20e" containerID="426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1" exitCode=0 Feb 02 11:26:34 crc kubenswrapper[4782]: I0202 11:26:34.554153 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerDied","Data":"426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1"} Feb 02 11:26:34 crc kubenswrapper[4782]: I0202 11:26:34.554187 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerStarted","Data":"cee203ab926e18b0e2174d06f18657c8b61e9bf7093328be556e238242433733"} Feb 02 11:26:35 crc kubenswrapper[4782]: I0202 11:26:35.564826 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerStarted","Data":"58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d"} Feb 02 11:26:36 crc kubenswrapper[4782]: I0202 11:26:36.577470 4782 generic.go:334] "Generic (PLEG): container finished" podID="9b66a766-dc87-45dd-a611-d9a30c3f327e" containerID="4941491cf1df1bc7280b824efa5aa4ba9575dbca5ae4407e9126b0211ca2c981" exitCode=0 Feb 02 11:26:36 crc kubenswrapper[4782]: I0202 11:26:36.577552 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" event={"ID":"9b66a766-dc87-45dd-a611-d9a30c3f327e","Type":"ContainerDied","Data":"4941491cf1df1bc7280b824efa5aa4ba9575dbca5ae4407e9126b0211ca2c981"} Feb 02 11:26:36 crc kubenswrapper[4782]: I0202 11:26:36.583981 4782 generic.go:334] "Generic (PLEG): container finished" podID="42224916-385d-4dd6-96c5-3e4080fac20e" containerID="58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d" exitCode=0 Feb 02 11:26:36 crc kubenswrapper[4782]: I0202 11:26:36.584031 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" 
event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerDied","Data":"58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d"} Feb 02 11:26:37 crc kubenswrapper[4782]: I0202 11:26:37.594823 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerStarted","Data":"75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777"} Feb 02 11:26:37 crc kubenswrapper[4782]: I0202 11:26:37.616828 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z42rt" podStartSLOduration=2.185732068 podStartE2EDuration="4.616809827s" podCreationTimestamp="2026-02-02 11:26:33 +0000 UTC" firstStartedPulling="2026-02-02 11:26:34.555609242 +0000 UTC m=+2874.439801958" lastFinishedPulling="2026-02-02 11:26:36.986687001 +0000 UTC m=+2876.870879717" observedRunningTime="2026-02-02 11:26:37.615012675 +0000 UTC m=+2877.499205391" watchObservedRunningTime="2026-02-02 11:26:37.616809827 +0000 UTC m=+2877.501002543" Feb 02 11:26:37 crc kubenswrapper[4782]: I0202 11:26:37.952682 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.110423 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-inventory\") pod \"9b66a766-dc87-45dd-a611-d9a30c3f327e\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.110493 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b7cv\" (UniqueName: \"kubernetes.io/projected/9b66a766-dc87-45dd-a611-d9a30c3f327e-kube-api-access-5b7cv\") pod \"9b66a766-dc87-45dd-a611-d9a30c3f327e\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.110524 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-combined-ca-bundle\") pod \"9b66a766-dc87-45dd-a611-d9a30c3f327e\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.110672 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ceph\") pod \"9b66a766-dc87-45dd-a611-d9a30c3f327e\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.110699 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-secret-0\") pod \"9b66a766-dc87-45dd-a611-d9a30c3f327e\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.110745 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ssh-key-openstack-edpm-ipam\") pod \"9b66a766-dc87-45dd-a611-d9a30c3f327e\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.116352 4782 
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.116558 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ceph" (OuterVolumeSpecName: "ceph") pod "9b66a766-dc87-45dd-a611-d9a30c3f327e" (UID: "9b66a766-dc87-45dd-a611-d9a30c3f327e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.123974 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b66a766-dc87-45dd-a611-d9a30c3f327e-kube-api-access-5b7cv" (OuterVolumeSpecName: "kube-api-access-5b7cv") pod "9b66a766-dc87-45dd-a611-d9a30c3f327e" (UID: "9b66a766-dc87-45dd-a611-d9a30c3f327e"). InnerVolumeSpecName "kube-api-access-5b7cv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.137350 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b66a766-dc87-45dd-a611-d9a30c3f327e" (UID: "9b66a766-dc87-45dd-a611-d9a30c3f327e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.140434 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-inventory" (OuterVolumeSpecName: "inventory") pod "9b66a766-dc87-45dd-a611-d9a30c3f327e" (UID: "9b66a766-dc87-45dd-a611-d9a30c3f327e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.141329 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "9b66a766-dc87-45dd-a611-d9a30c3f327e" (UID: "9b66a766-dc87-45dd-a611-d9a30c3f327e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.213211 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.213247 4782 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.213258 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.213267 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.213278 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b7cv\" (UniqueName: \"kubernetes.io/projected/9b66a766-dc87-45dd-a611-d9a30c3f327e-kube-api-access-5b7cv\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.213286 4782 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.603519 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.604060 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" event={"ID":"9b66a766-dc87-45dd-a611-d9a30c3f327e","Type":"ContainerDied","Data":"8ed3893ae70fd1cc2806d2c6231e36eed31c1d7012844cba421ea45199589821"} Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.604090 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ed3893ae70fd1cc2806d2c6231e36eed31c1d7012844cba421ea45199589821" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.748508 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp"] Feb 02 11:26:38 crc kubenswrapper[4782]: E0202 11:26:38.749164 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b66a766-dc87-45dd-a611-d9a30c3f327e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.749253 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b66a766-dc87-45dd-a611-d9a30c3f327e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.749491 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b66a766-dc87-45dd-a611-d9a30c3f327e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.750147 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.755963 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt"
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.756434 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.757627 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.759257 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.759598 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.759852 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.760006 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova"
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.760142 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.766797 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.805561 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp"]
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836286 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp"
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836352 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp"
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836416 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/dc15a3e1-ea96-499f-a268-b633c15ec75b-kube-api-access-rvnkc\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp"
Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836482 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp"
\"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836520 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836548 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836588 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836617 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836718 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836784 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836801 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: 
\"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938364 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938410 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938473 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938491 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938524 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/dc15a3e1-ea96-499f-a268-b633c15ec75b-kube-api-access-rvnkc\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938566 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938591 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938613 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: 
\"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938668 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938726 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938748 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.939409 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.942049 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.942704 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.943140 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 
11:26:38.943492 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.944164 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.948211 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.948330 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.948210 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.955122 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.958190 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/dc15a3e1-ea96-499f-a268-b633c15ec75b-kube-api-access-rvnkc\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:39 crc kubenswrapper[4782]: I0202 11:26:39.067175 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:39 crc kubenswrapper[4782]: I0202 11:26:39.578927 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp"] Feb 02 11:26:39 crc kubenswrapper[4782]: I0202 11:26:39.612510 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" event={"ID":"dc15a3e1-ea96-499f-a268-b633c15ec75b","Type":"ContainerStarted","Data":"90ee7ea00d96ed126531097af68cbbe31ad44d060c945efbf3734476778e8d22"} Feb 02 11:26:40 crc kubenswrapper[4782]: I0202 11:26:40.622016 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" event={"ID":"dc15a3e1-ea96-499f-a268-b633c15ec75b","Type":"ContainerStarted","Data":"219026f19d82228ad62538b8507aa6156d86888ca3dd2d1b1e3d2da088041c78"} Feb 02 11:26:40 crc kubenswrapper[4782]: I0202 11:26:40.644279 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" podStartSLOduration=2.252804964 podStartE2EDuration="2.644258551s" podCreationTimestamp="2026-02-02 11:26:38 +0000 UTC" firstStartedPulling="2026-02-02 11:26:39.587519266 +0000 UTC m=+2879.471711982" lastFinishedPulling="2026-02-02 11:26:39.978972853 +0000 UTC m=+2879.863165569" observedRunningTime="2026-02-02 11:26:40.636031834 +0000 UTC m=+2880.520224540" watchObservedRunningTime="2026-02-02 11:26:40.644258551 +0000 UTC m=+2880.528451267" Feb 02 11:26:43 crc kubenswrapper[4782]: I0202 11:26:43.512703 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:43 crc kubenswrapper[4782]: I0202 11:26:43.513436 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:43 crc kubenswrapper[4782]: I0202 11:26:43.562953 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:43 crc kubenswrapper[4782]: I0202 11:26:43.689904 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:43 crc kubenswrapper[4782]: I0202 11:26:43.798982 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z42rt"] Feb 02 11:26:45 crc kubenswrapper[4782]: I0202 11:26:45.656324 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z42rt" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="registry-server" containerID="cri-o://75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777" gracePeriod=2 Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.102750 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.268810 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-utilities\") pod \"42224916-385d-4dd6-96c5-3e4080fac20e\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.269551 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-catalog-content\") pod \"42224916-385d-4dd6-96c5-3e4080fac20e\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.269819 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-utilities" (OuterVolumeSpecName: "utilities") pod "42224916-385d-4dd6-96c5-3e4080fac20e" (UID: "42224916-385d-4dd6-96c5-3e4080fac20e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.270014 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mfc8\" (UniqueName: \"kubernetes.io/projected/42224916-385d-4dd6-96c5-3e4080fac20e-kube-api-access-5mfc8\") pod \"42224916-385d-4dd6-96c5-3e4080fac20e\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.270521 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.275957 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42224916-385d-4dd6-96c5-3e4080fac20e-kube-api-access-5mfc8" (OuterVolumeSpecName: "kube-api-access-5mfc8") pod "42224916-385d-4dd6-96c5-3e4080fac20e" (UID: "42224916-385d-4dd6-96c5-3e4080fac20e"). InnerVolumeSpecName "kube-api-access-5mfc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.373475 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mfc8\" (UniqueName: \"kubernetes.io/projected/42224916-385d-4dd6-96c5-3e4080fac20e-kube-api-access-5mfc8\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.523934 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42224916-385d-4dd6-96c5-3e4080fac20e" (UID: "42224916-385d-4dd6-96c5-3e4080fac20e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.576870 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.665722 4782 generic.go:334] "Generic (PLEG): container finished" podID="42224916-385d-4dd6-96c5-3e4080fac20e" containerID="75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777" exitCode=0 Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.665763 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerDied","Data":"75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777"} Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.665783 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.665808 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerDied","Data":"cee203ab926e18b0e2174d06f18657c8b61e9bf7093328be556e238242433733"} Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.665834 4782 scope.go:117] "RemoveContainer" containerID="75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.711516 4782 scope.go:117] "RemoveContainer" containerID="58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.720039 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z42rt"] Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.740169 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z42rt"] Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.762939 4782 scope.go:117] "RemoveContainer" containerID="426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.812599 4782 scope.go:117] "RemoveContainer" containerID="75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777" Feb 02 11:26:46 crc kubenswrapper[4782]: E0202 11:26:46.813000 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777\": container with ID starting with 75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777 not found: ID does not exist" containerID="75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.813041 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777"} err="failed to get container status \"75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777\": rpc error: code = NotFound desc = could not find container \"75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777\": container with ID starting with 75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777 not found: ID does not exist" Feb 02 
11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.813068 4782 scope.go:117] "RemoveContainer" containerID="58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d" Feb 02 11:26:46 crc kubenswrapper[4782]: E0202 11:26:46.813281 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d\": container with ID starting with 58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d not found: ID does not exist" containerID="58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.813309 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d"} err="failed to get container status \"58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d\": rpc error: code = NotFound desc = could not find container \"58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d\": container with ID starting with 58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d not found: ID does not exist" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.813328 4782 scope.go:117] "RemoveContainer" containerID="426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1" Feb 02 11:26:46 crc kubenswrapper[4782]: E0202 11:26:46.813839 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1\": container with ID starting with 426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1 not found: ID does not exist" containerID="426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.815044 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1"} err="failed to get container status \"426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1\": rpc error: code = NotFound desc = could not find container \"426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1\": container with ID starting with 426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1 not found: ID does not exist" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.834406 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" path="/var/lib/kubelet/pods/42224916-385d-4dd6-96c5-3e4080fac20e/volumes" Feb 02 11:26:52 crc kubenswrapper[4782]: I0202 11:26:52.951393 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:26:52 crc kubenswrapper[4782]: I0202 11:26:52.951999 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:27:22 crc kubenswrapper[4782]: I0202 11:27:22.952190 4782 patch_prober.go:28] 
interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:27:22 crc kubenswrapper[4782]: I0202 11:27:22.952994 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:27:22 crc kubenswrapper[4782]: I0202 11:27:22.953047 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:27:22 crc kubenswrapper[4782]: I0202 11:27:22.954221 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f3d837b63dfbe34932b87b521d0696398b6ad3538c5af0b35f7849a712f00d7"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:27:22 crc kubenswrapper[4782]: I0202 11:27:22.954277 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://6f3d837b63dfbe34932b87b521d0696398b6ad3538c5af0b35f7849a712f00d7" gracePeriod=600 Feb 02 11:27:23 crc kubenswrapper[4782]: I0202 11:27:23.975409 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="6f3d837b63dfbe34932b87b521d0696398b6ad3538c5af0b35f7849a712f00d7" exitCode=0 Feb 02 11:27:23 crc kubenswrapper[4782]: I0202 11:27:23.975494 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"6f3d837b63dfbe34932b87b521d0696398b6ad3538c5af0b35f7849a712f00d7"} Feb 02 11:27:23 crc kubenswrapper[4782]: I0202 11:27:23.976082 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2"} Feb 02 11:27:23 crc kubenswrapper[4782]: I0202 11:27:23.976113 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.866669 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8n7qq"] Feb 02 11:27:57 crc kubenswrapper[4782]: E0202 11:27:57.869599 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="registry-server" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.869633 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="registry-server" Feb 02 11:27:57 crc kubenswrapper[4782]: E0202 11:27:57.869698 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" 
containerName="extract-utilities" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.869709 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="extract-utilities" Feb 02 11:27:57 crc kubenswrapper[4782]: E0202 11:27:57.869744 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="extract-content" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.869752 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="extract-content" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.869996 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="registry-server" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.871464 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.893462 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8n7qq"] Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.962906 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zbp9\" (UniqueName: \"kubernetes.io/projected/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-kube-api-access-5zbp9\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.962980 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-catalog-content\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.963279 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-utilities\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.065286 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zbp9\" (UniqueName: \"kubernetes.io/projected/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-kube-api-access-5zbp9\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.065345 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-catalog-content\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.065418 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-utilities\") pod \"community-operators-8n7qq\" (UID: 
\"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.065998 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-utilities\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.066279 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-catalog-content\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.088843 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zbp9\" (UniqueName: \"kubernetes.io/projected/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-kube-api-access-5zbp9\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.193213 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.931221 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8n7qq"] Feb 02 11:27:59 crc kubenswrapper[4782]: I0202 11:27:59.313912 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerStarted","Data":"3cf5ab6f19cc2e3fa21dd2c3eeeed9e8b46dc188b00ed0b78a3f1058f84aa0d1"} Feb 02 11:28:00 crc kubenswrapper[4782]: I0202 11:28:00.323346 4782 generic.go:334] "Generic (PLEG): container finished" podID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerID="d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835" exitCode=0 Feb 02 11:28:00 crc kubenswrapper[4782]: I0202 11:28:00.323402 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerDied","Data":"d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835"} Feb 02 11:28:02 crc kubenswrapper[4782]: I0202 11:28:02.338090 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerStarted","Data":"a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c"} Feb 02 11:28:06 crc kubenswrapper[4782]: I0202 11:28:06.391733 4782 generic.go:334] "Generic (PLEG): container finished" podID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerID="a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c" exitCode=0 Feb 02 11:28:06 crc kubenswrapper[4782]: I0202 11:28:06.391920 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerDied","Data":"a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c"} Feb 02 11:28:08 crc kubenswrapper[4782]: I0202 11:28:08.416039 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerStarted","Data":"1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad"} Feb 02 11:28:08 crc kubenswrapper[4782]: I0202 11:28:08.448575 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8n7qq" podStartSLOduration=4.637970459 podStartE2EDuration="11.448554852s" podCreationTimestamp="2026-02-02 11:27:57 +0000 UTC" firstStartedPulling="2026-02-02 11:28:00.325516069 +0000 UTC m=+2960.209708785" lastFinishedPulling="2026-02-02 11:28:07.136100462 +0000 UTC m=+2967.020293178" observedRunningTime="2026-02-02 11:28:08.442186469 +0000 UTC m=+2968.326379195" watchObservedRunningTime="2026-02-02 11:28:08.448554852 +0000 UTC m=+2968.332747568" Feb 02 11:28:18 crc kubenswrapper[4782]: I0202 11:28:18.194755 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:28:18 crc kubenswrapper[4782]: I0202 11:28:18.195199 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:28:18 crc kubenswrapper[4782]: I0202 11:28:18.241493 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:28:18 crc kubenswrapper[4782]: I0202 11:28:18.535088 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:28:18 crc kubenswrapper[4782]: I0202 11:28:18.588265 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8n7qq"] Feb 02 11:28:20 crc kubenswrapper[4782]: I0202 11:28:20.524955 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8n7qq" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="registry-server" containerID="cri-o://1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad" gracePeriod=2 Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.013804 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.124708 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zbp9\" (UniqueName: \"kubernetes.io/projected/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-kube-api-access-5zbp9\") pod \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.125098 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-utilities\") pod \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.125447 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-catalog-content\") pod \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.130003 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-utilities" (OuterVolumeSpecName: "utilities") pod "9fdc37e6-68ac-49ab-9c4c-d72c777a3002" (UID: "9fdc37e6-68ac-49ab-9c4c-d72c777a3002"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.134513 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-kube-api-access-5zbp9" (OuterVolumeSpecName: "kube-api-access-5zbp9") pod "9fdc37e6-68ac-49ab-9c4c-d72c777a3002" (UID: "9fdc37e6-68ac-49ab-9c4c-d72c777a3002"). InnerVolumeSpecName "kube-api-access-5zbp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.180617 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fdc37e6-68ac-49ab-9c4c-d72c777a3002" (UID: "9fdc37e6-68ac-49ab-9c4c-d72c777a3002"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.228163 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.228208 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zbp9\" (UniqueName: \"kubernetes.io/projected/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-kube-api-access-5zbp9\") on node \"crc\" DevicePath \"\"" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.228220 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.551169 4782 generic.go:334] "Generic (PLEG): container finished" podID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerID="1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad" exitCode=0 Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.551216 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerDied","Data":"1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad"} Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.551246 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerDied","Data":"3cf5ab6f19cc2e3fa21dd2c3eeeed9e8b46dc188b00ed0b78a3f1058f84aa0d1"} Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.551264 4782 scope.go:117] "RemoveContainer" containerID="1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.551307 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.574585 4782 scope.go:117] "RemoveContainer" containerID="a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.590609 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8n7qq"] Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.598569 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8n7qq"] Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.604095 4782 scope.go:117] "RemoveContainer" containerID="d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.650179 4782 scope.go:117] "RemoveContainer" containerID="1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad" Feb 02 11:28:21 crc kubenswrapper[4782]: E0202 11:28:21.650851 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad\": container with ID starting with 1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad not found: ID does not exist" containerID="1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.650898 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad"} err="failed to get container status \"1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad\": rpc error: code = NotFound desc = could not find container \"1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad\": container with ID starting with 1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad not found: ID does not exist" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.650929 4782 scope.go:117] "RemoveContainer" containerID="a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c" Feb 02 11:28:21 crc kubenswrapper[4782]: E0202 11:28:21.651409 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c\": container with ID starting with a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c not found: ID does not exist" containerID="a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.651517 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c"} err="failed to get container status \"a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c\": rpc error: code = NotFound desc = could not find container \"a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c\": container with ID starting with a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c not found: ID does not exist" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.651624 4782 scope.go:117] "RemoveContainer" containerID="d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835" Feb 02 11:28:21 crc kubenswrapper[4782]: E0202 11:28:21.652246 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835\": container with ID starting with d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835 not found: ID does not exist" containerID="d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.652280 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835"} err="failed to get container status \"d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835\": rpc error: code = NotFound desc = could not find container \"d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835\": container with ID starting with d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835 not found: ID does not exist" Feb 02 11:28:22 crc kubenswrapper[4782]: I0202 11:28:22.832549 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" path="/var/lib/kubelet/pods/9fdc37e6-68ac-49ab-9c4c-d72c777a3002/volumes" Feb 02 11:29:26 crc kubenswrapper[4782]: I0202 11:29:26.100722 4782 generic.go:334] "Generic (PLEG): container finished" podID="dc15a3e1-ea96-499f-a268-b633c15ec75b" containerID="219026f19d82228ad62538b8507aa6156d86888ca3dd2d1b1e3d2da088041c78" exitCode=0 Feb 02 11:29:26 crc kubenswrapper[4782]: I0202 11:29:26.100852 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" event={"ID":"dc15a3e1-ea96-499f-a268-b633c15ec75b","Type":"ContainerDied","Data":"219026f19d82228ad62538b8507aa6156d86888ca3dd2d1b1e3d2da088041c78"} Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.767247 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.883826 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.883875 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-custom-ceph-combined-ca-bundle\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.883909 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-1\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884564 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/dc15a3e1-ea96-499f-a268-b633c15ec75b-kube-api-access-rvnkc\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884631 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph-nova-0\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884689 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-inventory\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884716 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-extra-config-0\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884769 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-0\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884798 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-1\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884824 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ssh-key-openstack-edpm-ipam\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884922 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.891990 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph" (OuterVolumeSpecName: "ceph") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.893145 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc15a3e1-ea96-499f-a268-b633c15ec75b-kube-api-access-rvnkc" (OuterVolumeSpecName: "kube-api-access-rvnkc") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "kube-api-access-rvnkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.896743 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.926129 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-inventory" (OuterVolumeSpecName: "inventory") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.930987 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.935731 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.942178 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.955849 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.959986 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.980152 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.982873 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004165 4782 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004210 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004223 4782 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004245 4782 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004276 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/dc15a3e1-ea96-499f-a268-b633c15ec75b-kube-api-access-rvnkc\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004287 4782 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004300 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004310 4782 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004321 4782 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004331 4782 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004345 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.124207 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" event={"ID":"dc15a3e1-ea96-499f-a268-b633c15ec75b","Type":"ContainerDied","Data":"90ee7ea00d96ed126531097af68cbbe31ad44d060c945efbf3734476778e8d22"} Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.124269 4782 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90ee7ea00d96ed126531097af68cbbe31ad44d060c945efbf3734476778e8d22" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.124298 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.158456 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 02 11:29:44 crc kubenswrapper[4782]: E0202 11:29:44.159530 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc15a3e1-ea96-499f-a268-b633c15ec75b" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.159550 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc15a3e1-ea96-499f-a268-b633c15ec75b" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 02 11:29:44 crc kubenswrapper[4782]: E0202 11:29:44.159564 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="extract-content" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.159572 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="extract-content" Feb 02 11:29:44 crc kubenswrapper[4782]: E0202 11:29:44.159604 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="registry-server" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.159612 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="registry-server" Feb 02 11:29:44 crc kubenswrapper[4782]: E0202 11:29:44.159625 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="extract-utilities" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.159633 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="extract-utilities" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.159898 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc15a3e1-ea96-499f-a268-b633c15ec75b" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.159915 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="registry-server" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.190218 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.190348 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.193359 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.194182 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.199458 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.201629 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.204996 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.236036 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.394980 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395040 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395112 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-scripts\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395143 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395178 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395271 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lntp9\" (UniqueName: \"kubernetes.io/projected/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-kube-api-access-lntp9\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395320 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395350 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc 
kubenswrapper[4782]: I0202 11:29:44.395366 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-ceph\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395410 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395443 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395472 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395513 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395536 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395601 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-lib-modules\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395668 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395709 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-run\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395748 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395774 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395801 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5l5g\" (UniqueName: \"kubernetes.io/projected/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-kube-api-access-p5l5g\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395822 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395840 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395868 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-dev\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395886 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395909 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395950 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395981 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-config-data\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.396012 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.396036 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-run\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.396069 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.396091 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.396119 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-sys\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497352 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497404 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497687 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497698 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497748 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497784 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497801 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497819 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-lib-modules\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497844 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497872 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-run\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497912 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497940 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497974 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5l5g\" (UniqueName: \"kubernetes.io/projected/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-kube-api-access-p5l5g\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497999 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " 
pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498022 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498047 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-dev\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498071 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498097 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498149 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-config-data\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498194 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498221 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-run\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498253 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498280 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-combined-ca-bundle\") pod 
\"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498311 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-sys\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498356 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498373 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498414 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-dev\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498436 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498601 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498704 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498732 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499272 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-run\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499345 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499406 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499434 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-scripts\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499471 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499508 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lntp9\" (UniqueName: \"kubernetes.io/projected/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-kube-api-access-lntp9\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499534 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499558 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499580 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-ceph\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499677 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-lib-modules\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499713 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499747 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.502134 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.503715 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.504566 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.504728 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.504784 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.504812 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-sys\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.504847 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.505246 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-ceph\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.505317 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.505597 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.505900 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-run\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.507033 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.507573 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.508407 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-config-data\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.508614 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.509965 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-scripts\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.510462 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.525848 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.544247 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5l5g\" (UniqueName: \"kubernetes.io/projected/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-kube-api-access-p5l5g\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 
11:29:44.553558 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lntp9\" (UniqueName: \"kubernetes.io/projected/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-kube-api-access-lntp9\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.814765 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.824033 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.916861 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-88lt6"] Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.918617 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-88lt6" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.933603 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-88lt6"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.012707 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjldw\" (UniqueName: \"kubernetes.io/projected/d9a2fa32-7949-4dbe-8e51-49627e08f051-kube-api-access-hjldw\") pod \"manila-db-create-88lt6\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " pod="openstack/manila-db-create-88lt6" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.012758 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a2fa32-7949-4dbe-8e51-49627e08f051-operator-scripts\") pod \"manila-db-create-88lt6\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " pod="openstack/manila-db-create-88lt6" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.041409 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-61e9-account-create-update-vjlvv"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.042590 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.050191 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.072273 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.074203 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.077383 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.084229 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.084550 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.085321 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-57vkh" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.100794 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-61e9-account-create-update-vjlvv"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.114912 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjldw\" (UniqueName: \"kubernetes.io/projected/d9a2fa32-7949-4dbe-8e51-49627e08f051-kube-api-access-hjldw\") pod \"manila-db-create-88lt6\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " pod="openstack/manila-db-create-88lt6" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.114962 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a2fa32-7949-4dbe-8e51-49627e08f051-operator-scripts\") pod \"manila-db-create-88lt6\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " pod="openstack/manila-db-create-88lt6" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.115770 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a2fa32-7949-4dbe-8e51-49627e08f051-operator-scripts\") pod \"manila-db-create-88lt6\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " pod="openstack/manila-db-create-88lt6" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.119856 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.173491 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjldw\" (UniqueName: \"kubernetes.io/projected/d9a2fa32-7949-4dbe-8e51-49627e08f051-kube-api-access-hjldw\") pod \"manila-db-create-88lt6\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " pod="openstack/manila-db-create-88lt6" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.178134 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.179657 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.186309 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.186497 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.213606 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.216850 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-scripts\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.216928 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rndvm\" (UniqueName: \"kubernetes.io/projected/7260512c-a397-4b18-ab4d-a97e7dbf50d9-kube-api-access-rndvm\") pod \"manila-61e9-account-create-update-vjlvv\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.216998 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217039 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217098 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7260512c-a397-4b18-ab4d-a97e7dbf50d9-operator-scripts\") pod \"manila-61e9-account-create-update-vjlvv\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217118 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-ceph\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217171 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217190 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217243 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-config-data\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217257 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-logs\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217329 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c7wq\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-kube-api-access-5c7wq\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.276035 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-88lt6" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.312821 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5cc68dfb67-6l9rd"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.314348 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323098 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323165 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-config-data\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323187 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-logs\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323233 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323280 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c7wq\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-kube-api-access-5c7wq\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323329 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323355 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323379 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323414 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rxx\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-kube-api-access-l8rxx\") pod 
\"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323434 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-scripts\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323524 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rndvm\" (UniqueName: \"kubernetes.io/projected/7260512c-a397-4b18-ab4d-a97e7dbf50d9-kube-api-access-rndvm\") pod \"manila-61e9-account-create-update-vjlvv\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323590 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324064 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324106 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324427 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7260512c-a397-4b18-ab4d-a97e7dbf50d9-operator-scripts\") pod \"manila-61e9-account-create-update-vjlvv\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324469 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-ceph\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324707 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-logs\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324747 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" 
(UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324768 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324855 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.350927 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-logs\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.355346 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.355770 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-v8dpd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.357578 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.358061 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.363343 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7260512c-a397-4b18-ab4d-a97e7dbf50d9-operator-scripts\") pod \"manila-61e9-account-create-update-vjlvv\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.363722 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rndvm\" (UniqueName: \"kubernetes.io/projected/7260512c-a397-4b18-ab4d-a97e7dbf50d9-kube-api-access-rndvm\") pod \"manila-61e9-account-create-update-vjlvv\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.364403 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.367911 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-ceph\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.367989 4782 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.379025 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.387298 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.419585 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cc68dfb67-6l9rd"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.441246 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.489527 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.489601 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-logs\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.489655 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.489661 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c7wq\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-kube-api-access-5c7wq\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.489677 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.490195 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.490594 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.437620 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-scripts\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.491445 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-config-data\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.492367 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-logs\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.492739 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.492954 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.505148 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.509912 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.510003 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") 
" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.510100 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.510211 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8rxx\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-kube-api-access-l8rxx\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.530064 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.545665 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.563286 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.583813 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: E0202 11:29:45.585082 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance kube-api-access-l8rxx], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="7a649cbf-74c3-4519-a14f-92815ec8a297" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.592472 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8rxx\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-kube-api-access-l8rxx\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.608660 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.612972 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxs5g\" (UniqueName: \"kubernetes.io/projected/db3caff6-55ef-4b9f-9d45-15fc834e5974-kube-api-access-mxs5g\") pod 
\"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.613036 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db3caff6-55ef-4b9f-9d45-15fc834e5974-horizon-secret-key\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.613088 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db3caff6-55ef-4b9f-9d45-15fc834e5974-logs\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.613129 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-scripts\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.613190 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-config-data\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.626103 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.652777 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5545895985-nbz88"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.654903 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.688367 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5545895985-nbz88"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.696909 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.697818 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.715625 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-config-data\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.715709 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-scripts\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.715774 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00eb57a6-b941-443f-9b8a-644c0389b562-horizon-secret-key\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.715842 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxs5g\" (UniqueName: \"kubernetes.io/projected/db3caff6-55ef-4b9f-9d45-15fc834e5974-kube-api-access-mxs5g\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.715901 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db3caff6-55ef-4b9f-9d45-15fc834e5974-horizon-secret-key\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.715972 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsbxf\" (UniqueName: \"kubernetes.io/projected/00eb57a6-b941-443f-9b8a-644c0389b562-kube-api-access-gsbxf\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.716016 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db3caff6-55ef-4b9f-9d45-15fc834e5974-logs\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.716059 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-scripts\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.716087 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00eb57a6-b941-443f-9b8a-644c0389b562-logs\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 
11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.716131 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-config-data\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.717636 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-config-data\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.717912 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db3caff6-55ef-4b9f-9d45-15fc834e5974-logs\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.718359 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-scripts\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.734141 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db3caff6-55ef-4b9f-9d45-15fc834e5974-horizon-secret-key\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.743575 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxs5g\" (UniqueName: \"kubernetes.io/projected/db3caff6-55ef-4b9f-9d45-15fc834e5974-kube-api-access-mxs5g\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.744265 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.820929 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsbxf\" (UniqueName: \"kubernetes.io/projected/00eb57a6-b941-443f-9b8a-644c0389b562-kube-api-access-gsbxf\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.821018 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00eb57a6-b941-443f-9b8a-644c0389b562-logs\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.821051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-config-data\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc 
kubenswrapper[4782]: I0202 11:29:45.821108 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-scripts\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.821138 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00eb57a6-b941-443f-9b8a-644c0389b562-horizon-secret-key\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.822288 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00eb57a6-b941-443f-9b8a-644c0389b562-logs\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.823735 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-config-data\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.824236 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-scripts\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.828101 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00eb57a6-b941-443f-9b8a-644c0389b562-horizon-secret-key\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.847362 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsbxf\" (UniqueName: \"kubernetes.io/projected/00eb57a6-b941-443f-9b8a-644c0389b562-kube-api-access-gsbxf\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.994244 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.002429 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.017167 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:46 crc kubenswrapper[4782]: W0202 11:29:46.076968 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2b6a5bd_a9ae_4bc9_91ed_ca1ac5d7489a.slice/crio-309de090efa190d34ce86518617a66f401eb98c29502be404b5cf822787a7a49 WatchSource:0}: Error finding container 309de090efa190d34ce86518617a66f401eb98c29502be404b5cf822787a7a49: Status 404 returned error can't find the container with id 309de090efa190d34ce86518617a66f401eb98c29502be404b5cf822787a7a49 Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.109350 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-88lt6"] Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.267531 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-61e9-account-create-update-vjlvv"] Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.312321 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7","Type":"ContainerStarted","Data":"415b4b7bd0fc7798222630c7a319d125d0215742edd5f60525165470c16cbae1"} Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.313906 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a","Type":"ContainerStarted","Data":"309de090efa190d34ce86518617a66f401eb98c29502be404b5cf822787a7a49"} Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.313929 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.359175 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.437975 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-scripts\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438087 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-config-data\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438123 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-combined-ca-bundle\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438155 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438345 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-httpd-run\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438373 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8rxx\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-kube-api-access-l8rxx\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438413 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-logs\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438446 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-ceph\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438488 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-internal-tls-certs\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.440671 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-logs" (OuterVolumeSpecName: "logs") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.440688 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.446356 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.448526 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-scripts" (OuterVolumeSpecName: "scripts") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.449320 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-kube-api-access-l8rxx" (OuterVolumeSpecName: "kube-api-access-l8rxx") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "kube-api-access-l8rxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.452831 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.454269 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.455499 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-ceph" (OuterVolumeSpecName: "ceph") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.455824 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-config-data" (OuterVolumeSpecName: "config-data") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.540654 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541628 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541691 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541793 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541812 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541824 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8rxx\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-kube-api-access-l8rxx\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541837 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541848 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541858 4782 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541869 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.618432 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.643836 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: W0202 11:29:46.987801 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda41f7244_284a_4ffc_9243_1b6748d57f86.slice/crio-2467e6d6f5cb460604f0569a014d33d8ae6d53fd765a65386ea59ec098228383 WatchSource:0}: Error finding container 2467e6d6f5cb460604f0569a014d33d8ae6d53fd765a65386ea59ec098228383: Status 404 returned error can't find the container with id 
2467e6d6f5cb460604f0569a014d33d8ae6d53fd765a65386ea59ec098228383 Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.018300 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cc68dfb67-6l9rd"] Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.165791 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5545895985-nbz88"] Feb 02 11:29:47 crc kubenswrapper[4782]: W0202 11:29:47.301104 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00eb57a6_b941_443f_9b8a_644c0389b562.slice/crio-e9100eed16cfde1a50208bd22824d38ac7e93d47d356dc2ebbe784e45bca71cd WatchSource:0}: Error finding container e9100eed16cfde1a50208bd22824d38ac7e93d47d356dc2ebbe784e45bca71cd: Status 404 returned error can't find the container with id e9100eed16cfde1a50208bd22824d38ac7e93d47d356dc2ebbe784e45bca71cd Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.330843 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5545895985-nbz88" event={"ID":"00eb57a6-b941-443f-9b8a-644c0389b562","Type":"ContainerStarted","Data":"e9100eed16cfde1a50208bd22824d38ac7e93d47d356dc2ebbe784e45bca71cd"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.331996 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a41f7244-284a-4ffc-9243-1b6748d57f86","Type":"ContainerStarted","Data":"2467e6d6f5cb460604f0569a014d33d8ae6d53fd765a65386ea59ec098228383"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.343106 4782 generic.go:334] "Generic (PLEG): container finished" podID="7260512c-a397-4b18-ab4d-a97e7dbf50d9" containerID="18235f2d52d1acb53abcc5d69239ea08135f49af22250cec7c915e6b6af27b05" exitCode=0 Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.343818 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-61e9-account-create-update-vjlvv" event={"ID":"7260512c-a397-4b18-ab4d-a97e7dbf50d9","Type":"ContainerDied","Data":"18235f2d52d1acb53abcc5d69239ea08135f49af22250cec7c915e6b6af27b05"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.343855 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-61e9-account-create-update-vjlvv" event={"ID":"7260512c-a397-4b18-ab4d-a97e7dbf50d9","Type":"ContainerStarted","Data":"c7e76ee0f8108c2e88436d6fb1e53203db1db23de055ae76bb7a9b8f1dc59596"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.352936 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc68dfb67-6l9rd" event={"ID":"db3caff6-55ef-4b9f-9d45-15fc834e5974","Type":"ContainerStarted","Data":"78a7fb858e48a2d7c1668bcef174fb8172e784df949e22963608e184d25f8fba"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.363998 4782 generic.go:334] "Generic (PLEG): container finished" podID="d9a2fa32-7949-4dbe-8e51-49627e08f051" containerID="22ee95619a6ae6669166c5388f7644833a8f50918632409a8660abf992c5c5da" exitCode=0 Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.364092 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.364624 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-88lt6" event={"ID":"d9a2fa32-7949-4dbe-8e51-49627e08f051","Type":"ContainerDied","Data":"22ee95619a6ae6669166c5388f7644833a8f50918632409a8660abf992c5c5da"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.364680 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-88lt6" event={"ID":"d9a2fa32-7949-4dbe-8e51-49627e08f051","Type":"ContainerStarted","Data":"c7b6f3804e585f1815aa2f7a3dba4f157933b0f2077449b05093c8d992d9ba41"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.639505 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.675741 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.688569 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.690204 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.698056 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.700816 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.700996 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.798786 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-logs\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.798844 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.798899 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.798920 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 
11:29:47.798958 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.798996 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.799017 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.799047 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.799072 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thdm5\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-kube-api-access-thdm5\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.900921 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901287 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901338 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901370 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thdm5\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-kube-api-access-thdm5\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 
11:29:47.901434 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-logs\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901465 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901538 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901563 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901785 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.903940 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.904279 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.904791 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-logs\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.915247 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.915907 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.917742 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.918104 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.936437 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.947250 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thdm5\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-kube-api-access-thdm5\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.980604 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.106120 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.260457 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5545895985-nbz88"] Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.285212 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78d997b864-7sqws"] Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.287106 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.291938 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-tls-certs\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323289 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xwsf\" (UniqueName: \"kubernetes.io/projected/62cd5c24-315a-45c1-bca8-08696f1080cd-kube-api-access-6xwsf\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323399 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-secret-key\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323455 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-config-data\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323495 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62cd5c24-315a-45c1-bca8-08696f1080cd-logs\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323533 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-combined-ca-bundle\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323565 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-scripts\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.375572 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78d997b864-7sqws"] Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.397278 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a41f7244-284a-4ffc-9243-1b6748d57f86","Type":"ContainerStarted","Data":"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188"} Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.420627 
4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a","Type":"ContainerStarted","Data":"291036b9d7a925a04fafa224e3c653b902ccf0c28788dd6267314c7c09b26095"} Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.420697 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a","Type":"ContainerStarted","Data":"2a7a53b367855ba1b0419635d2604ea82790291a09d021a941102e78627a9e21"} Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425066 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-config-data\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425136 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62cd5c24-315a-45c1-bca8-08696f1080cd-logs\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425181 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-combined-ca-bundle\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425210 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-scripts\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425252 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-tls-certs\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xwsf\" (UniqueName: \"kubernetes.io/projected/62cd5c24-315a-45c1-bca8-08696f1080cd-kube-api-access-6xwsf\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425405 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-secret-key\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.429703 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-scripts\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" 
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.431905 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-config-data\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.432222 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62cd5c24-315a-45c1-bca8-08696f1080cd-logs\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.452781 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-tls-certs\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.453470 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-combined-ca-bundle\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.479383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xwsf\" (UniqueName: \"kubernetes.io/projected/62cd5c24-315a-45c1-bca8-08696f1080cd-kube-api-access-6xwsf\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.491713 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cc68dfb67-6l9rd"] Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.494926 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.224472362 podStartE2EDuration="4.494904245s" podCreationTimestamp="2026-02-02 11:29:44 +0000 UTC" firstStartedPulling="2026-02-02 11:29:46.109827015 +0000 UTC m=+3065.994019731" lastFinishedPulling="2026-02-02 11:29:47.380258898 +0000 UTC m=+3067.264451614" observedRunningTime="2026-02-02 11:29:48.460590679 +0000 UTC m=+3068.344783415" watchObservedRunningTime="2026-02-02 11:29:48.494904245 +0000 UTC m=+3068.379096961" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.507506 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-secret-key\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.520766 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7","Type":"ContainerStarted","Data":"887806dc03ca175efb88ba8d7004dc8bd9d13d65dcdac85ca4654bde6853e624"} Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.521287 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" 
event={"ID":"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7","Type":"ContainerStarted","Data":"5322598c674dda25fa507428fc9cc7c4897c935564e4f76434efb13a262333db"} Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.579735 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5665456548-9x6qh"] Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.581258 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.614700 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5665456548-9x6qh"] Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.626175 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.629251 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.397869664 podStartE2EDuration="4.629231524s" podCreationTimestamp="2026-02-02 11:29:44 +0000 UTC" firstStartedPulling="2026-02-02 11:29:45.790955273 +0000 UTC m=+3065.675147989" lastFinishedPulling="2026-02-02 11:29:47.022317133 +0000 UTC m=+3066.906509849" observedRunningTime="2026-02-02 11:29:48.575404417 +0000 UTC m=+3068.459597163" watchObservedRunningTime="2026-02-02 11:29:48.629231524 +0000 UTC m=+3068.513424240" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.644970 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dhck\" (UniqueName: \"kubernetes.io/projected/306e30f3-8fe7-427e-b8ff-309a561dda88-kube-api-access-7dhck\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.645032 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-horizon-secret-key\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.645095 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/306e30f3-8fe7-427e-b8ff-309a561dda88-scripts\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.645155 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-horizon-tls-certs\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.645263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-combined-ca-bundle\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.645444 4782 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/306e30f3-8fe7-427e-b8ff-309a561dda88-config-data\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.645611 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306e30f3-8fe7-427e-b8ff-309a561dda88-logs\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747685 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306e30f3-8fe7-427e-b8ff-309a561dda88-logs\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747794 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dhck\" (UniqueName: \"kubernetes.io/projected/306e30f3-8fe7-427e-b8ff-309a561dda88-kube-api-access-7dhck\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747825 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-horizon-secret-key\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747890 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/306e30f3-8fe7-427e-b8ff-309a561dda88-scripts\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747918 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-horizon-tls-certs\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747963 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-combined-ca-bundle\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747999 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/306e30f3-8fe7-427e-b8ff-309a561dda88-config-data\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.749149 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/306e30f3-8fe7-427e-b8ff-309a561dda88-config-data\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.749398 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306e30f3-8fe7-427e-b8ff-309a561dda88-logs\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.751559 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.762416 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/306e30f3-8fe7-427e-b8ff-309a561dda88-scripts\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.771630 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-horizon-tls-certs\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.773351 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dhck\" (UniqueName: \"kubernetes.io/projected/306e30f3-8fe7-427e-b8ff-309a561dda88-kube-api-access-7dhck\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.784247 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-horizon-secret-key\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.785769 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-combined-ca-bundle\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.858063 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a649cbf-74c3-4519-a14f-92815ec8a297" path="/var/lib/kubelet/pods/7a649cbf-74c3-4519-a14f-92815ec8a297/volumes" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.944900 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.342003 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.610781 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8dc11dda-830a-4b93-b670-e3fabc7b9c28","Type":"ContainerStarted","Data":"75f9c9716f7604da4d575e0f1b2688df7bd2eada642f2f22b1f7f24cb9d5e5c4"} Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.803347 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-88lt6" Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.810274 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.816865 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.825565 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.935293 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7260512c-a397-4b18-ab4d-a97e7dbf50d9-operator-scripts\") pod \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.935883 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a2fa32-7949-4dbe-8e51-49627e08f051-operator-scripts\") pod \"d9a2fa32-7949-4dbe-8e51-49627e08f051\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.936341 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjldw\" (UniqueName: \"kubernetes.io/projected/d9a2fa32-7949-4dbe-8e51-49627e08f051-kube-api-access-hjldw\") pod \"d9a2fa32-7949-4dbe-8e51-49627e08f051\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.936726 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rndvm\" (UniqueName: \"kubernetes.io/projected/7260512c-a397-4b18-ab4d-a97e7dbf50d9-kube-api-access-rndvm\") pod \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.944338 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7260512c-a397-4b18-ab4d-a97e7dbf50d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7260512c-a397-4b18-ab4d-a97e7dbf50d9" (UID: "7260512c-a397-4b18-ab4d-a97e7dbf50d9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.945698 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7260512c-a397-4b18-ab4d-a97e7dbf50d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.953259 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9a2fa32-7949-4dbe-8e51-49627e08f051-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9a2fa32-7949-4dbe-8e51-49627e08f051" (UID: "d9a2fa32-7949-4dbe-8e51-49627e08f051"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.957066 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a2fa32-7949-4dbe-8e51-49627e08f051-kube-api-access-hjldw" (OuterVolumeSpecName: "kube-api-access-hjldw") pod "d9a2fa32-7949-4dbe-8e51-49627e08f051" (UID: "d9a2fa32-7949-4dbe-8e51-49627e08f051"). InnerVolumeSpecName "kube-api-access-hjldw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.964826 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7260512c-a397-4b18-ab4d-a97e7dbf50d9-kube-api-access-rndvm" (OuterVolumeSpecName: "kube-api-access-rndvm") pod "7260512c-a397-4b18-ab4d-a97e7dbf50d9" (UID: "7260512c-a397-4b18-ab4d-a97e7dbf50d9"). InnerVolumeSpecName "kube-api-access-rndvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.048845 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjldw\" (UniqueName: \"kubernetes.io/projected/d9a2fa32-7949-4dbe-8e51-49627e08f051-kube-api-access-hjldw\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.048870 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rndvm\" (UniqueName: \"kubernetes.io/projected/7260512c-a397-4b18-ab4d-a97e7dbf50d9-kube-api-access-rndvm\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.048881 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a2fa32-7949-4dbe-8e51-49627e08f051-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.059977 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78d997b864-7sqws"] Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.085771 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5665456548-9x6qh"] Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.635833 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerStarted","Data":"d24d9db9a798247d6fbcd136dc3f9d15a710d6aee5946c313b4ac9b4fb5bc96d"} Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.644834 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-88lt6" event={"ID":"d9a2fa32-7949-4dbe-8e51-49627e08f051","Type":"ContainerDied","Data":"c7b6f3804e585f1815aa2f7a3dba4f157933b0f2077449b05093c8d992d9ba41"} Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.644933 4782 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7b6f3804e585f1815aa2f7a3dba4f157933b0f2077449b05093c8d992d9ba41" Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.645009 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-88lt6" Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.651227 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5665456548-9x6qh" event={"ID":"306e30f3-8fe7-427e-b8ff-309a561dda88","Type":"ContainerStarted","Data":"f4ea7ab54d316a4c5aa410fefb5964e41975add1b7b4b663fd3e5e10d2e6f010"} Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.665864 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a41f7244-284a-4ffc-9243-1b6748d57f86","Type":"ContainerStarted","Data":"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460"} Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.666801 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-log" containerID="cri-o://c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188" gracePeriod=30 Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.666821 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-httpd" containerID="cri-o://edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460" gracePeriod=30 Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.676907 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-61e9-account-create-update-vjlvv" event={"ID":"7260512c-a397-4b18-ab4d-a97e7dbf50d9","Type":"ContainerDied","Data":"c7e76ee0f8108c2e88436d6fb1e53203db1db23de055ae76bb7a9b8f1dc59596"} Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.677023 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7e76ee0f8108c2e88436d6fb1e53203db1db23de055ae76bb7a9b8f1dc59596" Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.677251 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.697802 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.697780638 podStartE2EDuration="7.697780638s" podCreationTimestamp="2026-02-02 11:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:29:50.694589656 +0000 UTC m=+3070.578782372" watchObservedRunningTime="2026-02-02 11:29:50.697780638 +0000 UTC m=+3070.581973354" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.690107 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.699570 4782 generic.go:334] "Generic (PLEG): container finished" podID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerID="edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460" exitCode=143 Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.699600 4782 generic.go:334] "Generic (PLEG): container finished" podID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerID="c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188" exitCode=143 Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.699630 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a41f7244-284a-4ffc-9243-1b6748d57f86","Type":"ContainerDied","Data":"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460"} Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.700005 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a41f7244-284a-4ffc-9243-1b6748d57f86","Type":"ContainerDied","Data":"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188"} Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.700023 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a41f7244-284a-4ffc-9243-1b6748d57f86","Type":"ContainerDied","Data":"2467e6d6f5cb460604f0569a014d33d8ae6d53fd765a65386ea59ec098228383"} Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.700040 4782 scope.go:117] "RemoveContainer" containerID="edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.700162 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.708247 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8dc11dda-830a-4b93-b670-e3fabc7b9c28","Type":"ContainerStarted","Data":"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc"} Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795065 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-httpd-run\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795124 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-public-tls-certs\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795154 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-scripts\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795186 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c7wq\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-kube-api-access-5c7wq\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795244 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-logs\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795271 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795387 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-ceph\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795441 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-config-data\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795462 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-combined-ca-bundle\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795700 4782 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.796092 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-logs" (OuterVolumeSpecName: "logs") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.796514 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.796537 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.800767 4782 scope.go:117] "RemoveContainer" containerID="c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.812415 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-ceph" (OuterVolumeSpecName: "ceph") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.814804 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-scripts" (OuterVolumeSpecName: "scripts") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.817935 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.829438 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-kube-api-access-5c7wq" (OuterVolumeSpecName: "kube-api-access-5c7wq") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "kube-api-access-5c7wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.852630 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.898880 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.898912 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c7wq\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-kube-api-access-5c7wq\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.898935 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.898946 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.898954 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.933784 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-config-data" (OuterVolumeSpecName: "config-data") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.945291 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.956104 4782 scope.go:117] "RemoveContainer" containerID="edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460" Feb 02 11:29:51 crc kubenswrapper[4782]: E0202 11:29:51.956967 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460\": container with ID starting with edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460 not found: ID does not exist" containerID="edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.957001 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460"} err="failed to get container status \"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460\": rpc error: code = NotFound desc = could not find container \"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460\": container with ID starting with edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460 not found: ID does not exist" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.957031 4782 scope.go:117] "RemoveContainer" containerID="c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188" Feb 02 11:29:51 crc kubenswrapper[4782]: E0202 11:29:51.958083 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188\": container with ID starting with c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188 not found: ID does not exist" containerID="c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.958108 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188"} err="failed to get container status \"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188\": rpc error: code = NotFound desc = could not find container \"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188\": container with ID starting with c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188 not found: ID does not exist" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.958120 4782 scope.go:117] "RemoveContainer" containerID="edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.958587 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460"} err="failed to get container status \"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460\": rpc error: code = NotFound desc = could not find container \"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460\": container with ID starting with edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460 not found: ID does not exist" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.958631 4782 scope.go:117] "RemoveContainer" containerID="c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188" Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.960351 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188"} err="failed to get container status \"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188\": rpc error: code = NotFound desc = could not find container \"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188\": container with ID starting with c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188 not found: ID does not exist" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.004367 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.004432 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.006341 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.106886 4782 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.382226 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.409662 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.428615 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:52 crc kubenswrapper[4782]: E0202 11:29:52.429074 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a2fa32-7949-4dbe-8e51-49627e08f051" containerName="mariadb-database-create" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429086 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a2fa32-7949-4dbe-8e51-49627e08f051" containerName="mariadb-database-create" Feb 02 11:29:52 crc kubenswrapper[4782]: E0202 11:29:52.429103 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7260512c-a397-4b18-ab4d-a97e7dbf50d9" containerName="mariadb-account-create-update" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429110 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7260512c-a397-4b18-ab4d-a97e7dbf50d9" containerName="mariadb-account-create-update" Feb 02 11:29:52 crc kubenswrapper[4782]: E0202 11:29:52.429131 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-httpd" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429145 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-httpd" Feb 02 11:29:52 crc kubenswrapper[4782]: E0202 11:29:52.429167 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-log" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429174 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-log" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429341 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-log" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429360 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="7260512c-a397-4b18-ab4d-a97e7dbf50d9" containerName="mariadb-account-create-update" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429397 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9a2fa32-7949-4dbe-8e51-49627e08f051" containerName="mariadb-database-create" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429412 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-httpd" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.430539 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.436440 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.436680 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.470119 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.523759 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-scripts\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.523846 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc86717-3e71-440c-a8f4-9cd4480e46d2-logs\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.523950 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.523986 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fdc86717-3e71-440c-a8f4-9cd4480e46d2-ceph\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.524008 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdc86717-3e71-440c-a8f4-9cd4480e46d2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.524036 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-config-data\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.524057 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.524085 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plr9t\" 
(UniqueName: \"kubernetes.io/projected/fdc86717-3e71-440c-a8f4-9cd4480e46d2-kube-api-access-plr9t\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.524162 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627225 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627392 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fdc86717-3e71-440c-a8f4-9cd4480e46d2-ceph\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627421 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdc86717-3e71-440c-a8f4-9cd4480e46d2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627545 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-config-data\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627562 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627588 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plr9t\" (UniqueName: \"kubernetes.io/projected/fdc86717-3e71-440c-a8f4-9cd4480e46d2-kube-api-access-plr9t\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627920 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.628064 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-scripts\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.628419 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdc86717-3e71-440c-a8f4-9cd4480e46d2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.628556 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc86717-3e71-440c-a8f4-9cd4480e46d2-logs\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.628936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc86717-3e71-440c-a8f4-9cd4480e46d2-logs\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.628606 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.637037 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.642480 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-scripts\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.646666 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fdc86717-3e71-440c-a8f4-9cd4480e46d2-ceph\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.657376 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-config-data\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.667139 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " 
pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.677301 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plr9t\" (UniqueName: \"kubernetes.io/projected/fdc86717-3e71-440c-a8f4-9cd4480e46d2-kube-api-access-plr9t\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.769901 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.816078 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8dc11dda-830a-4b93-b670-e3fabc7b9c28","Type":"ContainerStarted","Data":"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6"} Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.816250 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-log" containerID="cri-o://4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc" gracePeriod=30 Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.816881 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-httpd" containerID="cri-o://d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6" gracePeriod=30 Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.862977 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" path="/var/lib/kubelet/pods/a41f7244-284a-4ffc-9243-1b6748d57f86/volumes" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.864797 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.86477074 podStartE2EDuration="5.86477074s" podCreationTimestamp="2026-02-02 11:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:29:52.853191587 +0000 UTC m=+3072.737384303" watchObservedRunningTime="2026-02-02 11:29:52.86477074 +0000 UTC m=+3072.748963466" Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.951839 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.951896 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.075305 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.515755 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569376 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-internal-tls-certs\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569591 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-combined-ca-bundle\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569732 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569760 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-logs\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569851 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-httpd-run\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569874 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thdm5\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-kube-api-access-thdm5\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569922 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-scripts\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569940 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-config-data\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.570030 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-ceph\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.572072 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.590104 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-logs" (OuterVolumeSpecName: "logs") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.611774 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-scripts" (OuterVolumeSpecName: "scripts") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.616396 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.635851 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-kube-api-access-thdm5" (OuterVolumeSpecName: "kube-api-access-thdm5") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "kube-api-access-thdm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.638829 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-ceph" (OuterVolumeSpecName: "ceph") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.678775 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.678848 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.678864 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.678874 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thdm5\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-kube-api-access-thdm5\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.678885 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.678892 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.711272 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.734223 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.785515 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.785554 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.795998 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-config-data" (OuterVolumeSpecName: "config-data") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.796025 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.887349 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.887387 4782 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.904979 4782 generic.go:334] "Generic (PLEG): container finished" podID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerID="d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6" exitCode=143 Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.905024 4782 generic.go:334] "Generic (PLEG): container finished" podID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerID="4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc" exitCode=143 Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.905049 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8dc11dda-830a-4b93-b670-e3fabc7b9c28","Type":"ContainerDied","Data":"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6"} Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.905081 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8dc11dda-830a-4b93-b670-e3fabc7b9c28","Type":"ContainerDied","Data":"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc"} Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.905092 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8dc11dda-830a-4b93-b670-e3fabc7b9c28","Type":"ContainerDied","Data":"75f9c9716f7604da4d575e0f1b2688df7bd2eada642f2f22b1f7f24cb9d5e5c4"} Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.905118 4782 scope.go:117] "RemoveContainer" containerID="d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.905137 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.994097 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.009471 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.032755 4782 scope.go:117] "RemoveContainer" containerID="4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.052680 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:54 crc kubenswrapper[4782]: E0202 11:29:54.053313 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-httpd" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.059790 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-httpd" Feb 02 11:29:54 crc kubenswrapper[4782]: E0202 11:29:54.059960 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-log" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.060058 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-log" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.060418 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-httpd" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.060514 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-log" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.061710 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.073246 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.073583 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.085146 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.093258 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.093304 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.093614 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.093778 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.093915 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.093946 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.094223 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q5bj\" (UniqueName: \"kubernetes.io/projected/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-kube-api-access-7q5bj\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.094578 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.094632 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.133009 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.162600 4782 scope.go:117] "RemoveContainer" containerID="d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6" Feb 02 11:29:54 crc kubenswrapper[4782]: E0202 11:29:54.167989 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6\": container with ID starting with d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6 not found: ID does not exist" containerID="d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.168256 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6"} err="failed to get container status \"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6\": rpc error: code = NotFound desc = could not find container \"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6\": container with ID starting with d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6 not found: ID does not exist" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.168368 4782 scope.go:117] "RemoveContainer" containerID="4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc" Feb 02 11:29:54 crc kubenswrapper[4782]: E0202 11:29:54.176165 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc\": container with ID starting with 4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc not found: ID does not exist" containerID="4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.176215 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc"} err="failed to get container status \"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc\": rpc error: code = NotFound desc = could not find container \"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc\": container with ID starting with 4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc not found: ID does not exist" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.176246 4782 scope.go:117] "RemoveContainer" containerID="d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6" Feb 02 11:29:54 crc kubenswrapper[4782]: 
I0202 11:29:54.179813 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6"} err="failed to get container status \"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6\": rpc error: code = NotFound desc = could not find container \"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6\": container with ID starting with d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6 not found: ID does not exist" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.179863 4782 scope.go:117] "RemoveContainer" containerID="4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.183892 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc"} err="failed to get container status \"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc\": rpc error: code = NotFound desc = could not find container \"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc\": container with ID starting with 4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc not found: ID does not exist" Feb 02 11:29:54 crc kubenswrapper[4782]: W0202 11:29:54.197200 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc86717_3e71_440c_a8f4_9cd4480e46d2.slice/crio-e6b5e69590bbaedcae00f638b682f4823b35b2df65d41d2b5f828ed04b36f52d WatchSource:0}: Error finding container e6b5e69590bbaedcae00f638b682f4823b35b2df65d41d2b5f828ed04b36f52d: Status 404 returned error can't find the container with id e6b5e69590bbaedcae00f638b682f4823b35b2df65d41d2b5f828ed04b36f52d Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198122 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198175 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198290 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198383 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198411 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198436 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198466 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198526 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q5bj\" (UniqueName: \"kubernetes.io/projected/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-kube-api-access-7q5bj\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.199208 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.206256 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.207471 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.213232 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.213985 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.214032 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.214728 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.228000 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.304224 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q5bj\" (UniqueName: \"kubernetes.io/projected/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-kube-api-access-7q5bj\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.350444 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.398206 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.854462 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" path="/var/lib/kubelet/pods/8dc11dda-830a-4b93-b670-e3fabc7b9c28/volumes" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.941274 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdc86717-3e71-440c-a8f4-9cd4480e46d2","Type":"ContainerStarted","Data":"e6b5e69590bbaedcae00f638b682f4823b35b2df65d41d2b5f828ed04b36f52d"} Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.318579 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.357365 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.398597 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="5d7df751-5d4d-4ce4-83c9-70abd18fc7c7" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 11:29:55 crc kubenswrapper[4782]: W0202 11:29:55.420303 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c11a274_b189_4a4e_9a21_1c1d8fcc7f13.slice/crio-21e6d84cf8f55546a3150c88e51c96ae7761ed82de31a3e91ce5a6cefc478432 WatchSource:0}: Error finding container 21e6d84cf8f55546a3150c88e51c96ae7761ed82de31a3e91ce5a6cefc478432: Status 404 returned error can't find the container with id 21e6d84cf8f55546a3150c88e51c96ae7761ed82de31a3e91ce5a6cefc478432 Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.663902 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-p6nkb"] Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.665328 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.668019 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-tzzmn" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.668057 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.706997 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-p6nkb"] Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.761220 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-combined-ca-bundle\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.761321 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4s4n\" (UniqueName: \"kubernetes.io/projected/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-kube-api-access-w4s4n\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.761358 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-config-data\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.761430 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-job-config-data\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.866215 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-job-config-data\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.866753 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-combined-ca-bundle\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.866789 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4s4n\" (UniqueName: \"kubernetes.io/projected/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-kube-api-access-w4s4n\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.866822 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-config-data\") pod \"manila-db-sync-p6nkb\" (UID: 
\"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.873267 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-config-data\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.880082 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-combined-ca-bundle\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.887726 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-job-config-data\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.916708 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4s4n\" (UniqueName: \"kubernetes.io/projected/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-kube-api-access-w4s4n\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.956530 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdc86717-3e71-440c-a8f4-9cd4480e46d2","Type":"ContainerStarted","Data":"faa9ae2ad06f729c0fe48d5b0bbae14723b4ed65fa8e90818ea17da240b26437"} Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.961161 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13","Type":"ContainerStarted","Data":"21e6d84cf8f55546a3150c88e51c96ae7761ed82de31a3e91ce5a6cefc478432"} Feb 02 11:29:56 crc kubenswrapper[4782]: I0202 11:29:56.014030 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:56 crc kubenswrapper[4782]: I0202 11:29:56.863656 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-p6nkb"] Feb 02 11:29:56 crc kubenswrapper[4782]: I0202 11:29:56.976189 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13","Type":"ContainerStarted","Data":"be7673cfa97f9a174c60978e9bca29709d4620bcb9d36df6d57fadfa99e03fad"} Feb 02 11:29:56 crc kubenswrapper[4782]: I0202 11:29:56.985326 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-p6nkb" event={"ID":"f45fc51f-4efe-4cbf-9539-d858ac3c2e73","Type":"ContainerStarted","Data":"3bc352515a4e7faf8ad0720cf509467a309a1738f43d6a8ab5058f36de1f92ac"} Feb 02 11:29:58 crc kubenswrapper[4782]: I0202 11:29:57.998273 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdc86717-3e71-440c-a8f4-9cd4480e46d2","Type":"ContainerStarted","Data":"7dcfb599143095b271280ff259a627804e6af6bfdc17d343edf8986854ff4661"} Feb 02 11:29:58 crc kubenswrapper[4782]: I0202 11:29:58.001257 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13","Type":"ContainerStarted","Data":"a38b6d76c352d48fa771945ab2479e55d78e15bb0853d036d00c722e84363632"} Feb 02 11:29:58 crc kubenswrapper[4782]: I0202 11:29:58.021518 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.021498914 podStartE2EDuration="6.021498914s" podCreationTimestamp="2026-02-02 11:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:29:58.020053723 +0000 UTC m=+3077.904246469" watchObservedRunningTime="2026-02-02 11:29:58.021498914 +0000 UTC m=+3077.905691650" Feb 02 11:29:58 crc kubenswrapper[4782]: I0202 11:29:58.116119 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.115746622 podStartE2EDuration="4.115746622s" podCreationTimestamp="2026-02-02 11:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:29:58.058266931 +0000 UTC m=+3077.942459667" watchObservedRunningTime="2026-02-02 11:29:58.115746622 +0000 UTC m=+3077.999939338" Feb 02 11:29:59 crc kubenswrapper[4782]: I0202 11:29:59.839165 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.151286 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7"] Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.152589 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.168706 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.168967 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.189411 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7"] Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.313567 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/44a78b5b-c712-4b4a-a035-652aea7086d0-kube-api-access-qgqc9\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.313742 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a78b5b-c712-4b4a-a035-652aea7086d0-config-volume\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.313927 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a78b5b-c712-4b4a-a035-652aea7086d0-secret-volume\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.416154 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a78b5b-c712-4b4a-a035-652aea7086d0-secret-volume\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.416340 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/44a78b5b-c712-4b4a-a035-652aea7086d0-kube-api-access-qgqc9\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.416425 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a78b5b-c712-4b4a-a035-652aea7086d0-config-volume\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.417698 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a78b5b-c712-4b4a-a035-652aea7086d0-config-volume\") pod 
\"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.426391 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a78b5b-c712-4b4a-a035-652aea7086d0-secret-volume\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.442838 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/44a78b5b-c712-4b4a-a035-652aea7086d0-kube-api-access-qgqc9\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.510804 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:03 crc kubenswrapper[4782]: I0202 11:30:03.077895 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 11:30:03 crc kubenswrapper[4782]: I0202 11:30:03.078513 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 11:30:03 crc kubenswrapper[4782]: I0202 11:30:03.115061 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 11:30:03 crc kubenswrapper[4782]: I0202 11:30:03.123526 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 11:30:04 crc kubenswrapper[4782]: I0202 11:30:04.070570 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 11:30:04 crc kubenswrapper[4782]: I0202 11:30:04.070701 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 11:30:04 crc kubenswrapper[4782]: I0202 11:30:04.398822 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:04 crc kubenswrapper[4782]: I0202 11:30:04.399869 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:04 crc kubenswrapper[4782]: I0202 11:30:04.435023 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:04 crc kubenswrapper[4782]: I0202 11:30:04.469550 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:05 crc kubenswrapper[4782]: I0202 11:30:05.081872 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:05 crc kubenswrapper[4782]: I0202 11:30:05.081912 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:07 crc kubenswrapper[4782]: I0202 11:30:07.109452 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 11:30:07 
crc kubenswrapper[4782]: I0202 11:30:07.109949 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 11:30:08 crc kubenswrapper[4782]: I0202 11:30:08.983686 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:08 crc kubenswrapper[4782]: I0202 11:30:08.984227 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 11:30:09 crc kubenswrapper[4782]: I0202 11:30:09.004010 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 11:30:09 crc kubenswrapper[4782]: I0202 11:30:09.004158 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 11:30:09 crc kubenswrapper[4782]: I0202 11:30:09.061532 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:09 crc kubenswrapper[4782]: I0202 11:30:09.102394 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 11:30:10 crc kubenswrapper[4782]: E0202 11:30:10.953543 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-manila-api:current-podified" Feb 02 11:30:10 crc kubenswrapper[4782]: E0202 11:30:10.954160 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manila-db-sync,Image:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,Command:[/bin/bash],Args:[-c sleep 0 && /usr/bin/manila-manage --config-dir /etc/manila/manila.conf.d db sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:job-config-data,ReadOnly:true,MountPath:/etc/manila/manila.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4s4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42429,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42429,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-db-sync-p6nkb_openstack(f45fc51f-4efe-4cbf-9539-d858ac3c2e73): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:30:10 crc kubenswrapper[4782]: E0202 11:30:10.955322 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/manila-db-sync-p6nkb" podUID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" Feb 02 11:30:11 crc kubenswrapper[4782]: E0202 11:30:11.179325 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-manila-api:current-podified\\\"\"" pod="openstack/manila-db-sync-p6nkb" podUID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" Feb 02 11:30:11 crc kubenswrapper[4782]: I0202 11:30:11.934322 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7"] Feb 02 11:30:11 crc kubenswrapper[4782]: W0202 11:30:11.938794 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44a78b5b_c712_4b4a_a035_652aea7086d0.slice/crio-a912ad1ef8dfddd09a0ade3d9ef9c3c34a666e6fdb39525d87f08b4c267b434b WatchSource:0}: Error finding container a912ad1ef8dfddd09a0ade3d9ef9c3c34a666e6fdb39525d87f08b4c267b434b: Status 404 returned error can't find the container with id a912ad1ef8dfddd09a0ade3d9ef9c3c34a666e6fdb39525d87f08b4c267b434b Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.197868 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" event={"ID":"44a78b5b-c712-4b4a-a035-652aea7086d0","Type":"ContainerStarted","Data":"a912ad1ef8dfddd09a0ade3d9ef9c3c34a666e6fdb39525d87f08b4c267b434b"} Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.201092 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5545895985-nbz88" event={"ID":"00eb57a6-b941-443f-9b8a-644c0389b562","Type":"ContainerStarted","Data":"c6c330e3edacbb0c6580054ae3c4de6722a7bdd108981d1f47eb37aef9ebad0d"} Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.201231 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5545895985-nbz88" event={"ID":"00eb57a6-b941-443f-9b8a-644c0389b562","Type":"ContainerStarted","Data":"618faf4ceb8379e0f7bff9b59632f78463103656504f1a973f8dfd513683614b"} Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.201564 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5545895985-nbz88" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon-log" containerID="cri-o://618faf4ceb8379e0f7bff9b59632f78463103656504f1a973f8dfd513683614b" gracePeriod=30 Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.201700 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5545895985-nbz88" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon" containerID="cri-o://c6c330e3edacbb0c6580054ae3c4de6722a7bdd108981d1f47eb37aef9ebad0d" gracePeriod=30 Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.222278 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5665456548-9x6qh" event={"ID":"306e30f3-8fe7-427e-b8ff-309a561dda88","Type":"ContainerStarted","Data":"d67252276d8993c4ef2e41eb9882c821d54823662c482a91c5ee6a5d0ca0b08f"} Feb 
02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.222331 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5665456548-9x6qh" event={"ID":"306e30f3-8fe7-427e-b8ff-309a561dda88","Type":"ContainerStarted","Data":"d7e359bc78356df469d48e5750e96b222300fd8fead2a75722bdc9db69969013"} Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.240247 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc68dfb67-6l9rd" event={"ID":"db3caff6-55ef-4b9f-9d45-15fc834e5974","Type":"ContainerStarted","Data":"9f2bf757bc39fc655216c4242c0a40a0327977b393c1fa025b532ded72bb1141"} Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.240289 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc68dfb67-6l9rd" event={"ID":"db3caff6-55ef-4b9f-9d45-15fc834e5974","Type":"ContainerStarted","Data":"c05a499f13ecee94245644953871b44805a3e34c1de64b64494b85574ed777d5"} Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.242117 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cc68dfb67-6l9rd" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon-log" containerID="cri-o://c05a499f13ecee94245644953871b44805a3e34c1de64b64494b85574ed777d5" gracePeriod=30 Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.242705 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cc68dfb67-6l9rd" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon" containerID="cri-o://9f2bf757bc39fc655216c4242c0a40a0327977b393c1fa025b532ded72bb1141" gracePeriod=30 Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.245947 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5545895985-nbz88" podStartSLOduration=3.543111719 podStartE2EDuration="27.245923283s" podCreationTimestamp="2026-02-02 11:29:45 +0000 UTC" firstStartedPulling="2026-02-02 11:29:47.306374155 +0000 UTC m=+3067.190566871" lastFinishedPulling="2026-02-02 11:30:11.009185729 +0000 UTC m=+3090.893378435" observedRunningTime="2026-02-02 11:30:12.231505279 +0000 UTC m=+3092.115698005" watchObservedRunningTime="2026-02-02 11:30:12.245923283 +0000 UTC m=+3092.130116009" Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.267855 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerStarted","Data":"09295effad802ea8438e358847ecb01f49091fb80c4d58e17763b7d006278a11"} Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.267907 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerStarted","Data":"d69e181d159dbc6c08cd056aa1bf1d0f3003f078165449a5f856df54eed28770"} Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.289014 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5665456548-9x6qh" podStartSLOduration=3.362094385 podStartE2EDuration="24.288998271s" podCreationTimestamp="2026-02-02 11:29:48 +0000 UTC" firstStartedPulling="2026-02-02 11:29:50.082720325 +0000 UTC m=+3069.966913031" lastFinishedPulling="2026-02-02 11:30:11.009624201 +0000 UTC m=+3090.893816917" observedRunningTime="2026-02-02 11:30:12.27054226 +0000 UTC m=+3092.154734976" watchObservedRunningTime="2026-02-02 11:30:12.288998271 +0000 UTC m=+3092.173190987" Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.309271 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78d997b864-7sqws" podStartSLOduration=3.407990964 podStartE2EDuration="24.309252893s" podCreationTimestamp="2026-02-02 11:29:48 +0000 UTC" firstStartedPulling="2026-02-02 11:29:50.10792497 +0000 UTC m=+3069.992117696" lastFinishedPulling="2026-02-02 11:30:11.009186909 +0000 UTC m=+3090.893379625" observedRunningTime="2026-02-02 11:30:12.296700012 +0000 UTC m=+3092.180892728" watchObservedRunningTime="2026-02-02 11:30:12.309252893 +0000 UTC m=+3092.193445599"
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.328990 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5cc68dfb67-6l9rd" podStartSLOduration=3.380136326 podStartE2EDuration="27.328972979s" podCreationTimestamp="2026-02-02 11:29:45 +0000 UTC" firstStartedPulling="2026-02-02 11:29:47.060454349 +0000 UTC m=+3066.944647065" lastFinishedPulling="2026-02-02 11:30:11.009291002 +0000 UTC m=+3090.893483718" observedRunningTime="2026-02-02 11:30:12.3195916 +0000 UTC m=+3092.203784316" watchObservedRunningTime="2026-02-02 11:30:12.328972979 +0000 UTC m=+3092.213165695"
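The four "Observed pod startup duration" entries make the relationship between the two reported durations explicit: podStartSLOduration is podStartE2EDuration minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling, taken on the monotonic m=+... clock). For horizon-5545895985-nbz88 that is 27.245923283s - (3090.893378435 - 3067.190566871)s = 3.543111719s, exactly the logged value. A small Go check of that arithmetic, with the numbers copied from the entry above:

package main

import "fmt"

func main() {
	// Monotonic clock readings (the m=+... offsets) for
	// horizon-5545895985-nbz88, in seconds since kubelet start.
	firstStartedPulling := 3067.190566871
	lastFinishedPulling := 3090.893378435
	podStartE2E := 27.245923283 // observedRunningTime - podCreationTimestamp

	// podStartSLOduration excludes the image-pull window.
	slo := podStartE2E - (lastFinishedPulling - firstStartedPulling)
	fmt.Printf("podStartSLOduration=%.9f\n", slo) // prints 3.543111719
}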
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.887884 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a78b5b-c712-4b4a-a035-652aea7086d0-kube-api-access-qgqc9" (OuterVolumeSpecName: "kube-api-access-qgqc9") pod "44a78b5b-c712-4b4a-a035-652aea7086d0" (UID: "44a78b5b-c712-4b4a-a035-652aea7086d0"). InnerVolumeSpecName "kube-api-access-qgqc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.888722 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a78b5b-c712-4b4a-a035-652aea7086d0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44a78b5b-c712-4b4a-a035-652aea7086d0" (UID: "44a78b5b-c712-4b4a-a035-652aea7086d0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.964694 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a78b5b-c712-4b4a-a035-652aea7086d0-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.964747 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a78b5b-c712-4b4a-a035-652aea7086d0-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.964762 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/44a78b5b-c712-4b4a-a035-652aea7086d0-kube-api-access-qgqc9\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:15 crc kubenswrapper[4782]: I0202 11:30:15.299982 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" event={"ID":"44a78b5b-c712-4b4a-a035-652aea7086d0","Type":"ContainerDied","Data":"a912ad1ef8dfddd09a0ade3d9ef9c3c34a666e6fdb39525d87f08b4c267b434b"} Feb 02 11:30:15 crc kubenswrapper[4782]: I0202 11:30:15.300030 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a912ad1ef8dfddd09a0ade3d9ef9c3c34a666e6fdb39525d87f08b4c267b434b" Feb 02 11:30:15 crc kubenswrapper[4782]: I0202 11:30:15.300036 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:15 crc kubenswrapper[4782]: I0202 11:30:15.881070 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc"] Feb 02 11:30:15 crc kubenswrapper[4782]: I0202 11:30:15.894000 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc"] Feb 02 11:30:16 crc kubenswrapper[4782]: I0202 11:30:16.001802 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:30:16 crc kubenswrapper[4782]: I0202 11:30:16.017722 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5545895985-nbz88" Feb 02 11:30:16 crc kubenswrapper[4782]: I0202 11:30:16.838147 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" path="/var/lib/kubelet/pods/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa/volumes" Feb 02 11:30:18 crc kubenswrapper[4782]: I0202 11:30:18.626924 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:30:18 crc kubenswrapper[4782]: I0202 11:30:18.628325 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:30:18 crc kubenswrapper[4782]: I0202 11:30:18.945466 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:30:18 crc kubenswrapper[4782]: I0202 11:30:18.945524 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:30:22 crc kubenswrapper[4782]: I0202 11:30:22.951124 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:30:22 crc kubenswrapper[4782]: I0202 11:30:22.952612 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:30:24 crc kubenswrapper[4782]: I0202 11:30:24.825426 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:30:26 crc kubenswrapper[4782]: I0202 11:30:26.500818 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-p6nkb" event={"ID":"f45fc51f-4efe-4cbf-9539-d858ac3c2e73","Type":"ContainerStarted","Data":"c033e06590ed48930855476f355d38330fd5900d1d6d3cdf6a14188571b721f2"} Feb 02 11:30:26 crc kubenswrapper[4782]: I0202 11:30:26.527824 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-p6nkb" podStartSLOduration=3.020594818 podStartE2EDuration="31.527805033s" podCreationTimestamp="2026-02-02 11:29:55 +0000 UTC" firstStartedPulling="2026-02-02 11:29:56.885933557 +0000 UTC m=+3076.770126273" lastFinishedPulling="2026-02-02 11:30:25.393143772 +0000 UTC m=+3105.277336488" observedRunningTime="2026-02-02 11:30:26.519431953 +0000 UTC m=+3106.403624699" 
watchObservedRunningTime="2026-02-02 11:30:26.527805033 +0000 UTC m=+3106.411997749" Feb 02 11:30:28 crc kubenswrapper[4782]: I0202 11:30:28.629841 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Feb 02 11:30:28 crc kubenswrapper[4782]: I0202 11:30:28.947801 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5665456548-9x6qh" podUID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Feb 02 11:30:38 crc kubenswrapper[4782]: I0202 11:30:38.629016 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Feb 02 11:30:38 crc kubenswrapper[4782]: I0202 11:30:38.946462 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5665456548-9x6qh" podUID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.680186 4782 generic.go:334] "Generic (PLEG): container finished" podID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" containerID="c033e06590ed48930855476f355d38330fd5900d1d6d3cdf6a14188571b721f2" exitCode=0 Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.680812 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-p6nkb" event={"ID":"f45fc51f-4efe-4cbf-9539-d858ac3c2e73","Type":"ContainerDied","Data":"c033e06590ed48930855476f355d38330fd5900d1d6d3cdf6a14188571b721f2"} Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.699298 4782 generic.go:334] "Generic (PLEG): container finished" podID="00eb57a6-b941-443f-9b8a-644c0389b562" containerID="c6c330e3edacbb0c6580054ae3c4de6722a7bdd108981d1f47eb37aef9ebad0d" exitCode=137 Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.699331 4782 generic.go:334] "Generic (PLEG): container finished" podID="00eb57a6-b941-443f-9b8a-644c0389b562" containerID="618faf4ceb8379e0f7bff9b59632f78463103656504f1a973f8dfd513683614b" exitCode=137 Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.699343 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5545895985-nbz88" event={"ID":"00eb57a6-b941-443f-9b8a-644c0389b562","Type":"ContainerDied","Data":"c6c330e3edacbb0c6580054ae3c4de6722a7bdd108981d1f47eb37aef9ebad0d"} Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.699395 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5545895985-nbz88" event={"ID":"00eb57a6-b941-443f-9b8a-644c0389b562","Type":"ContainerDied","Data":"618faf4ceb8379e0f7bff9b59632f78463103656504f1a973f8dfd513683614b"} Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.705996 4782 generic.go:334] "Generic (PLEG): container finished" podID="db3caff6-55ef-4b9f-9d45-15fc834e5974" 
containerID="9f2bf757bc39fc655216c4242c0a40a0327977b393c1fa025b532ded72bb1141" exitCode=137 Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.706031 4782 generic.go:334] "Generic (PLEG): container finished" podID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerID="c05a499f13ecee94245644953871b44805a3e34c1de64b64494b85574ed777d5" exitCode=137 Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.706065 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc68dfb67-6l9rd" event={"ID":"db3caff6-55ef-4b9f-9d45-15fc834e5974","Type":"ContainerDied","Data":"9f2bf757bc39fc655216c4242c0a40a0327977b393c1fa025b532ded72bb1141"} Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.706091 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc68dfb67-6l9rd" event={"ID":"db3caff6-55ef-4b9f-9d45-15fc834e5974","Type":"ContainerDied","Data":"c05a499f13ecee94245644953871b44805a3e34c1de64b64494b85574ed777d5"} Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.845708 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.944942 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-config-data\") pod \"db3caff6-55ef-4b9f-9d45-15fc834e5974\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.945034 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-scripts\") pod \"db3caff6-55ef-4b9f-9d45-15fc834e5974\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.945082 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db3caff6-55ef-4b9f-9d45-15fc834e5974-horizon-secret-key\") pod \"db3caff6-55ef-4b9f-9d45-15fc834e5974\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.945942 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxs5g\" (UniqueName: \"kubernetes.io/projected/db3caff6-55ef-4b9f-9d45-15fc834e5974-kube-api-access-mxs5g\") pod \"db3caff6-55ef-4b9f-9d45-15fc834e5974\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.945992 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db3caff6-55ef-4b9f-9d45-15fc834e5974-logs\") pod \"db3caff6-55ef-4b9f-9d45-15fc834e5974\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.947981 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db3caff6-55ef-4b9f-9d45-15fc834e5974-logs" (OuterVolumeSpecName: "logs") pod "db3caff6-55ef-4b9f-9d45-15fc834e5974" (UID: "db3caff6-55ef-4b9f-9d45-15fc834e5974"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.951149 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3caff6-55ef-4b9f-9d45-15fc834e5974-kube-api-access-mxs5g" (OuterVolumeSpecName: "kube-api-access-mxs5g") pod "db3caff6-55ef-4b9f-9d45-15fc834e5974" (UID: "db3caff6-55ef-4b9f-9d45-15fc834e5974"). InnerVolumeSpecName "kube-api-access-mxs5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.952794 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3caff6-55ef-4b9f-9d45-15fc834e5974-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "db3caff6-55ef-4b9f-9d45-15fc834e5974" (UID: "db3caff6-55ef-4b9f-9d45-15fc834e5974"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.974263 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-scripts" (OuterVolumeSpecName: "scripts") pod "db3caff6-55ef-4b9f-9d45-15fc834e5974" (UID: "db3caff6-55ef-4b9f-9d45-15fc834e5974"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.978787 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-config-data" (OuterVolumeSpecName: "config-data") pod "db3caff6-55ef-4b9f-9d45-15fc834e5974" (UID: "db3caff6-55ef-4b9f-9d45-15fc834e5974"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.047335 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxs5g\" (UniqueName: \"kubernetes.io/projected/db3caff6-55ef-4b9f-9d45-15fc834e5974-kube-api-access-mxs5g\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.047373 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db3caff6-55ef-4b9f-9d45-15fc834e5974-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.047385 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.047396 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.047405 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db3caff6-55ef-4b9f-9d45-15fc834e5974-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.126962 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5545895985-nbz88" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.249383 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-config-data\") pod \"00eb57a6-b941-443f-9b8a-644c0389b562\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.249482 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsbxf\" (UniqueName: \"kubernetes.io/projected/00eb57a6-b941-443f-9b8a-644c0389b562-kube-api-access-gsbxf\") pod \"00eb57a6-b941-443f-9b8a-644c0389b562\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.249541 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00eb57a6-b941-443f-9b8a-644c0389b562-horizon-secret-key\") pod \"00eb57a6-b941-443f-9b8a-644c0389b562\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.249585 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-scripts\") pod \"00eb57a6-b941-443f-9b8a-644c0389b562\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.249703 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00eb57a6-b941-443f-9b8a-644c0389b562-logs\") pod \"00eb57a6-b941-443f-9b8a-644c0389b562\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.259957 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00eb57a6-b941-443f-9b8a-644c0389b562-logs" (OuterVolumeSpecName: "logs") pod "00eb57a6-b941-443f-9b8a-644c0389b562" (UID: "00eb57a6-b941-443f-9b8a-644c0389b562"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.260364 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00eb57a6-b941-443f-9b8a-644c0389b562-kube-api-access-gsbxf" (OuterVolumeSpecName: "kube-api-access-gsbxf") pod "00eb57a6-b941-443f-9b8a-644c0389b562" (UID: "00eb57a6-b941-443f-9b8a-644c0389b562"). InnerVolumeSpecName "kube-api-access-gsbxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.260705 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00eb57a6-b941-443f-9b8a-644c0389b562-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "00eb57a6-b941-443f-9b8a-644c0389b562" (UID: "00eb57a6-b941-443f-9b8a-644c0389b562"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.278604 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-scripts" (OuterVolumeSpecName: "scripts") pod "00eb57a6-b941-443f-9b8a-644c0389b562" (UID: "00eb57a6-b941-443f-9b8a-644c0389b562"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.294569 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-config-data" (OuterVolumeSpecName: "config-data") pod "00eb57a6-b941-443f-9b8a-644c0389b562" (UID: "00eb57a6-b941-443f-9b8a-644c0389b562"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.351610 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.351697 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsbxf\" (UniqueName: \"kubernetes.io/projected/00eb57a6-b941-443f-9b8a-644c0389b562-kube-api-access-gsbxf\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.351713 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00eb57a6-b941-443f-9b8a-644c0389b562-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.351725 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.351735 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00eb57a6-b941-443f-9b8a-644c0389b562-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.718446 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5545895985-nbz88" event={"ID":"00eb57a6-b941-443f-9b8a-644c0389b562","Type":"ContainerDied","Data":"e9100eed16cfde1a50208bd22824d38ac7e93d47d356dc2ebbe784e45bca71cd"} Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.718502 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5545895985-nbz88" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.718512 4782 scope.go:117] "RemoveContainer" containerID="c6c330e3edacbb0c6580054ae3c4de6722a7bdd108981d1f47eb37aef9ebad0d" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.726284 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc68dfb67-6l9rd" event={"ID":"db3caff6-55ef-4b9f-9d45-15fc834e5974","Type":"ContainerDied","Data":"78a7fb858e48a2d7c1668bcef174fb8172e784df949e22963608e184d25f8fba"} Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.726684 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.769601 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5545895985-nbz88"] Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.793118 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5545895985-nbz88"] Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.802538 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cc68dfb67-6l9rd"] Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.811915 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5cc68dfb67-6l9rd"] Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.933095 4782 scope.go:117] "RemoveContainer" containerID="618faf4ceb8379e0f7bff9b59632f78463103656504f1a973f8dfd513683614b" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.960523 4782 scope.go:117] "RemoveContainer" containerID="9f2bf757bc39fc655216c4242c0a40a0327977b393c1fa025b532ded72bb1141" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.159750 4782 scope.go:117] "RemoveContainer" containerID="c05a499f13ecee94245644953871b44805a3e34c1de64b64494b85574ed777d5" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.302958 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-p6nkb" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.381194 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-combined-ca-bundle\") pod \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.381268 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-config-data\") pod \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.381304 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4s4n\" (UniqueName: \"kubernetes.io/projected/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-kube-api-access-w4s4n\") pod \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.381359 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-job-config-data\") pod \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.387256 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-kube-api-access-w4s4n" (OuterVolumeSpecName: "kube-api-access-w4s4n") pod "f45fc51f-4efe-4cbf-9539-d858ac3c2e73" (UID: "f45fc51f-4efe-4cbf-9539-d858ac3c2e73"). InnerVolumeSpecName "kube-api-access-w4s4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.408831 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "f45fc51f-4efe-4cbf-9539-d858ac3c2e73" (UID: "f45fc51f-4efe-4cbf-9539-d858ac3c2e73"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.409137 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-config-data" (OuterVolumeSpecName: "config-data") pod "f45fc51f-4efe-4cbf-9539-d858ac3c2e73" (UID: "f45fc51f-4efe-4cbf-9539-d858ac3c2e73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.416700 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f45fc51f-4efe-4cbf-9539-d858ac3c2e73" (UID: "f45fc51f-4efe-4cbf-9539-d858ac3c2e73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.483800 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.484079 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.484158 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4s4n\" (UniqueName: \"kubernetes.io/projected/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-kube-api-access-w4s4n\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.484225 4782 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-job-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.759696 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-p6nkb" event={"ID":"f45fc51f-4efe-4cbf-9539-d858ac3c2e73","Type":"ContainerDied","Data":"3bc352515a4e7faf8ad0720cf509467a309a1738f43d6a8ab5058f36de1f92ac"} Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.759718 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-p6nkb" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.759734 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bc352515a4e7faf8ad0720cf509467a309a1738f43d6a8ab5058f36de1f92ac" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.832175 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" path="/var/lib/kubelet/pods/00eb57a6-b941-443f-9b8a-644c0389b562/volumes" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.833495 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" path="/var/lib/kubelet/pods/db3caff6-55ef-4b9f-9d45-15fc834e5974/volumes" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.077429 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:30:45 crc kubenswrapper[4782]: E0202 11:30:45.077942 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.077965 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon" Feb 02 11:30:45 crc kubenswrapper[4782]: E0202 11:30:45.077980 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a78b5b-c712-4b4a-a035-652aea7086d0" containerName="collect-profiles" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.077986 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a78b5b-c712-4b4a-a035-652aea7086d0" containerName="collect-profiles" Feb 02 11:30:45 crc kubenswrapper[4782]: E0202 11:30:45.078000 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" containerName="manila-db-sync" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078006 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" containerName="manila-db-sync" Feb 02 11:30:45 crc kubenswrapper[4782]: E0202 11:30:45.078026 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon-log" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078033 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon-log" Feb 02 11:30:45 crc kubenswrapper[4782]: E0202 11:30:45.078053 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078059 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon" Feb 02 11:30:45 crc kubenswrapper[4782]: E0202 11:30:45.078082 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon-log" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078089 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon-log" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078281 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" containerName="manila-db-sync" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078299 4782 
memory_manager.go:354] "RemoveStaleState removing state" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon-log" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078316 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078330 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon-log" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078342 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a78b5b-c712-4b4a-a035-652aea7086d0" containerName="collect-profiles" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078351 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.079603 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.086406 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-tzzmn" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.088121 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.088340 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.096508 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.097186 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.109139 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.113056 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.201869 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.201921 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.201945 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.201982 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-scripts\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202014 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202040 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhpfm\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-kube-api-access-mhpfm\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202061 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202096 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-ceph\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202121 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202181 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-scripts\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202208 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202233 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fjnn\" (UniqueName: \"kubernetes.io/projected/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-kube-api-access-6fjnn\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202306 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202340 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.207729 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.268425 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.303931 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304087 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304116 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " 
pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304137 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304182 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-scripts\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304210 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304235 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhpfm\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-kube-api-access-mhpfm\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304254 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-ceph\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304311 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304390 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-scripts\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304416 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304442 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fjnn\" (UniqueName: 
\"kubernetes.io/projected/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-kube-api-access-6fjnn\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304523 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.305695 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.316759 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.316927 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.316963 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.328271 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.349132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-scripts\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.349252 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhpfm\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-kube-api-access-mhpfm\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.363765 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-ceph\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.374698 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-scripts\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.374906 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.375278 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.375873 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fjnn\" (UniqueName: \"kubernetes.io/projected/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-kube-api-access-6fjnn\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.378958 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.379454 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d98f8586f-f76zz"] Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.390882 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.393222 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.408511 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d98f8586f-f76zz"] Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.416733 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.442389 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.520668 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.520849 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-dns-svc\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.521007 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9679h\" (UniqueName: \"kubernetes.io/projected/cfe77ae5-55f0-440b-b0af-ef3eb1637800-kube-api-access-9679h\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.521087 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-config\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.521945 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-ovsdbserver-sb\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.522045 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-ovsdbserver-nb\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.593222 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.597598 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.604383 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.629178 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.629308 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-dns-svc\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.630264 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9679h\" (UniqueName: \"kubernetes.io/projected/cfe77ae5-55f0-440b-b0af-ef3eb1637800-kube-api-access-9679h\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.630383 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-config\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.630415 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-ovsdbserver-sb\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.630461 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-ovsdbserver-nb\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.633048 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.633692 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-dns-svc\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.634307 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-config\") pod 
\"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.634856 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-ovsdbserver-nb\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.635069 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-ovsdbserver-sb\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.642671 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.671020 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9679h\" (UniqueName: \"kubernetes.io/projected/cfe77ae5-55f0-440b-b0af-ef3eb1637800-kube-api-access-9679h\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735141 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735540 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c207707b-d720-4bfd-b93a-23ff4bc42674-logs\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735665 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735703 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data-custom\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735744 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c207707b-d720-4bfd-b93a-23ff4bc42674-etc-machine-id\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735825 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdmj\" (UniqueName: 
\"kubernetes.io/projected/c207707b-d720-4bfd-b93a-23ff4bc42674-kube-api-access-2rdmj\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735867 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-scripts\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.834729 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.837794 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.837842 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c207707b-d720-4bfd-b93a-23ff4bc42674-logs\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.837911 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.837931 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data-custom\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.837959 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c207707b-d720-4bfd-b93a-23ff4bc42674-etc-machine-id\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.838007 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdmj\" (UniqueName: \"kubernetes.io/projected/c207707b-d720-4bfd-b93a-23ff4bc42674-kube-api-access-2rdmj\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.838034 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-scripts\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.838968 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c207707b-d720-4bfd-b93a-23ff4bc42674-etc-machine-id\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " 
pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.841302 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c207707b-d720-4bfd-b93a-23ff4bc42674-logs\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.850177 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.850328 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.850592 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data-custom\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.883688 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-scripts\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.899805 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdmj\" (UniqueName: \"kubernetes.io/projected/c207707b-d720-4bfd-b93a-23ff4bc42674-kube-api-access-2rdmj\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.936859 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.543333 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.602150 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:30:46 crc kubenswrapper[4782]: W0202 11:30:46.634750 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe03de2e_2ddc_4cb1_b5be_7adb4add6582.slice/crio-08c5105f53bbbb34b5fc28061ef06193852bf85c93b32e422897b4cbfd23205d WatchSource:0}: Error finding container 08c5105f53bbbb34b5fc28061ef06193852bf85c93b32e422897b4cbfd23205d: Status 404 returned error can't find the container with id 08c5105f53bbbb34b5fc28061ef06193852bf85c93b32e422897b4cbfd23205d Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.733391 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d98f8586f-f76zz"] Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.881497 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a","Type":"ContainerStarted","Data":"8540ed31a6d2b8e3e589043ccf8a1a2071b1ba7d96df1fa53995124ff3fbc8af"} Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.892128 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"be03de2e-2ddc-4cb1-b5be-7adb4add6582","Type":"ContainerStarted","Data":"08c5105f53bbbb34b5fc28061ef06193852bf85c93b32e422897b4cbfd23205d"} Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.901364 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" event={"ID":"cfe77ae5-55f0-440b-b0af-ef3eb1637800","Type":"ContainerStarted","Data":"cda8335e7393aac6701704b8aab1a930c572e0290b52a6bcda437cd1fbdaae4a"} Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.922073 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:46 crc kubenswrapper[4782]: W0202 11:30:46.929562 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc207707b_d720_4bfd_b93a_23ff4bc42674.slice/crio-326799e35291bc063780e962603f98d3b371489f883728485c740920bed59c83 WatchSource:0}: Error finding container 326799e35291bc063780e962603f98d3b371489f883728485c740920bed59c83: Status 404 returned error can't find the container with id 326799e35291bc063780e962603f98d3b371489f883728485c740920bed59c83 Feb 02 11:30:47 crc kubenswrapper[4782]: I0202 11:30:47.976834 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c207707b-d720-4bfd-b93a-23ff4bc42674","Type":"ContainerStarted","Data":"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23"} Feb 02 11:30:47 crc kubenswrapper[4782]: I0202 11:30:47.977510 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c207707b-d720-4bfd-b93a-23ff4bc42674","Type":"ContainerStarted","Data":"326799e35291bc063780e962603f98d3b371489f883728485c740920bed59c83"} Feb 02 11:30:47 crc kubenswrapper[4782]: I0202 11:30:47.980623 4782 generic.go:334] "Generic (PLEG): container finished" podID="cfe77ae5-55f0-440b-b0af-ef3eb1637800" containerID="8db8ae3957cadf7a36e98fb5d63df37effcb1da01ece45c4f98976f5289eccef" exitCode=0 Feb 02 
Feb 02 11:30:47 crc kubenswrapper[4782]: I0202 11:30:47.980665 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" event={"ID":"cfe77ae5-55f0-440b-b0af-ef3eb1637800","Type":"ContainerDied","Data":"8db8ae3957cadf7a36e98fb5d63df37effcb1da01ece45c4f98976f5289eccef"} Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.029719 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.036893 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a","Type":"ContainerStarted","Data":"b497c2954dd3a78c6953a3ffd64222499e19e8de5359aac6f81a3ec8c829fd03"} Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.051225 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" event={"ID":"cfe77ae5-55f0-440b-b0af-ef3eb1637800","Type":"ContainerStarted","Data":"443068a12d7a85105538997e50775a3f8cfa1163a74495460639983cb262a4d9"} Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.051515 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.064171 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c207707b-d720-4bfd-b93a-23ff4bc42674","Type":"ContainerStarted","Data":"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a"} Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.064694 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.088442 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" podStartSLOduration=4.08841601 podStartE2EDuration="4.08841601s" podCreationTimestamp="2026-02-02 11:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:30:49.083183159 +0000 UTC m=+3128.967375875" watchObservedRunningTime="2026-02-02 11:30:49.08841601 +0000 UTC m=+3128.972608726" Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.127885 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.127863423 podStartE2EDuration="4.127863423s" podCreationTimestamp="2026-02-02 11:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:30:49.1141965 +0000 UTC m=+3128.998389226" watchObservedRunningTime="2026-02-02 11:30:49.127863423 +0000 UTC m=+3129.012056139" Feb 02 11:30:50 crc kubenswrapper[4782]: I0202 11:30:50.086527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a","Type":"ContainerStarted","Data":"cc68c0f777fc5436c540b425a3326b6391ec6d6b6b3b5fe43f8e31bcd626fc43"} Feb 02 11:30:50 crc kubenswrapper[4782]: I0202 11:30:50.086723 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api-log" containerID="cri-o://ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23" gracePeriod=30
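
Note: the pod_startup_latency_tracker.go:104 entries above encode kubelet's pod-start SLI: podStartSLOduration is creation-to-running time minus any image-pull window, while podStartE2EDuration keeps the pulls in. The zero-value timestamps ("0001-01-01 ...", Go's zero time.Time) mean no image had to be pulled, so the two durations match; compare the manila-scheduler entry further down, where a real pull window of about a second separates them. A rough recomputation, assuming those field semantics:

```python
from datetime import datetime, timezone

ZERO = "0001-01-01 00:00:00 +0000 UTC"  # zero time.Time: nothing was pulled

def parse(ts: str) -> datetime:
    # e.g. "2026-02-02 11:30:49.08841601 +0000 UTC m=+3128.972608726"
    ts = ts.split(" +0000")[0]           # drop zone and monotonic-clock suffix
    fmt = "%Y-%m-%d %H:%M:%S"
    if "." in ts:                        # strptime's %f takes at most 6 digits
        head, frac = ts.split(".")
        ts = f"{head}.{frac[:6]}"
        fmt += ".%f"
    return datetime.strptime(ts, fmt).replace(tzinfo=timezone.utc)

def slo_duration(created: str, first_pull: str, last_pull: str, running: str) -> float:
    total = (parse(running) - parse(created)).total_seconds()
    if first_pull != ZERO:               # subtract the image-pull window
        total -= (parse(last_pull) - parse(first_pull)).total_seconds()
    return total

# dnsmasq-dns-7d98f8586f-f76zz, fields copied from the entry above:
print(slo_duration("2026-02-02 11:30:45 +0000 UTC", ZERO, ZERO,
                   "2026-02-02 11:30:49.08841601 +0000 UTC m=+3128.972608726"))
# -> 4.088416, matching podStartSLOduration=4.08841601 up to the trimmed digits
```
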
Feb 02 11:30:50 crc kubenswrapper[4782]: I0202 11:30:50.086795 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api" containerID="cri-o://89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a" gracePeriod=30 Feb 02 11:30:50 crc kubenswrapper[4782]: I0202 11:30:50.118093 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.073222382 podStartE2EDuration="5.118076093s" podCreationTimestamp="2026-02-02 11:30:45 +0000 UTC" firstStartedPulling="2026-02-02 11:30:46.557591253 +0000 UTC m=+3126.441783969" lastFinishedPulling="2026-02-02 11:30:47.602444964 +0000 UTC m=+3127.486637680" observedRunningTime="2026-02-02 11:30:50.117135176 +0000 UTC m=+3130.001327902" watchObservedRunningTime="2026-02-02 11:30:50.118076093 +0000 UTC m=+3130.002268809" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.024902 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.111657 4782 generic.go:334] "Generic (PLEG): container finished" podID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerID="89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a" exitCode=0 Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.111693 4782 generic.go:334] "Generic (PLEG): container finished" podID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerID="ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23" exitCode=143 Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.112558 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.112577 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c207707b-d720-4bfd-b93a-23ff4bc42674","Type":"ContainerDied","Data":"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a"} Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.112665 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c207707b-d720-4bfd-b93a-23ff4bc42674","Type":"ContainerDied","Data":"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23"} Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.112680 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c207707b-d720-4bfd-b93a-23ff4bc42674","Type":"ContainerDied","Data":"326799e35291bc063780e962603f98d3b371489f883728485c740920bed59c83"} Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.112699 4782 scope.go:117] "RemoveContainer" containerID="89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123403 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c207707b-d720-4bfd-b93a-23ff4bc42674-logs\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123451 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-scripts\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123546 
4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c207707b-d720-4bfd-b93a-23ff4bc42674-etc-machine-id\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123609 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-combined-ca-bundle\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123727 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data-custom\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123825 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123870 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rdmj\" (UniqueName: \"kubernetes.io/projected/c207707b-d720-4bfd-b93a-23ff4bc42674-kube-api-access-2rdmj\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123969 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c207707b-d720-4bfd-b93a-23ff4bc42674-logs" (OuterVolumeSpecName: "logs") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.124018 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c207707b-d720-4bfd-b93a-23ff4bc42674-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.124335 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c207707b-d720-4bfd-b93a-23ff4bc42674-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.124352 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c207707b-d720-4bfd-b93a-23ff4bc42674-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.133014 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.133597 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c207707b-d720-4bfd-b93a-23ff4bc42674-kube-api-access-2rdmj" (OuterVolumeSpecName: "kube-api-access-2rdmj") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "kube-api-access-2rdmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.134343 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-scripts" (OuterVolumeSpecName: "scripts") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.185166 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.207536 4782 scope.go:117] "RemoveContainer" containerID="ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.225736 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.225772 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.225782 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rdmj\" (UniqueName: \"kubernetes.io/projected/c207707b-d720-4bfd-b93a-23ff4bc42674-kube-api-access-2rdmj\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.225789 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.263867 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data" (OuterVolumeSpecName: "config-data") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.277494 4782 scope.go:117] "RemoveContainer" containerID="dca388f48a923df889d89ab8317f39bb415b2f6f2849925bcccbaf4b1c7171f9" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.328186 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.330120 4782 scope.go:117] "RemoveContainer" containerID="89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a" Feb 02 11:30:51 crc kubenswrapper[4782]: E0202 11:30:51.330778 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a\": container with ID starting with 89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a not found: ID does not exist" containerID="89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.330828 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a"} err="failed to get container status \"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a\": rpc error: code = NotFound desc = could not find container \"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a\": container with ID starting with 89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a not found: ID does not exist" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.330855 4782 scope.go:117] "RemoveContainer" containerID="ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23" Feb 02 11:30:51 crc kubenswrapper[4782]: E0202 11:30:51.332762 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23\": container with ID starting with ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23 not found: ID does not exist" containerID="ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.332790 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23"} err="failed to get container status \"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23\": rpc error: code = NotFound desc = could not find container \"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23\": container with ID starting with ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23 not found: ID does not exist" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.333026 4782 scope.go:117] "RemoveContainer" containerID="89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.333664 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a"} err="failed to get container status \"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a\": rpc error: code = NotFound desc = could not find container 
\"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a\": container with ID starting with 89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a not found: ID does not exist" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.333682 4782 scope.go:117] "RemoveContainer" containerID="ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.334361 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23"} err="failed to get container status \"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23\": rpc error: code = NotFound desc = could not find container \"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23\": container with ID starting with ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23 not found: ID does not exist" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.469554 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.491411 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.510800 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:51 crc kubenswrapper[4782]: E0202 11:30:51.511307 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.511323 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api" Feb 02 11:30:51 crc kubenswrapper[4782]: E0202 11:30:51.511354 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api-log" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.511361 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api-log" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.511569 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.511592 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api-log" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.512586 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.517945 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.518147 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.541064 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.556243 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638163 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbslk\" (UniqueName: \"kubernetes.io/projected/2af78116-7ef2-4447-b552-7b0d2eaedf90-kube-api-access-lbslk\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638248 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-config-data-custom\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638374 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-internal-tls-certs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638414 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-scripts\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638476 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2af78116-7ef2-4447-b552-7b0d2eaedf90-etc-machine-id\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638501 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2af78116-7ef2-4447-b552-7b0d2eaedf90-logs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638549 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-public-tls-certs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638575 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638626 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-config-data\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740261 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-internal-tls-certs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740616 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-scripts\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740690 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2af78116-7ef2-4447-b552-7b0d2eaedf90-etc-machine-id\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740714 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2af78116-7ef2-4447-b552-7b0d2eaedf90-logs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740747 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-public-tls-certs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740770 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740815 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-config-data\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740842 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbslk\" (UniqueName: \"kubernetes.io/projected/2af78116-7ef2-4447-b552-7b0d2eaedf90-kube-api-access-lbslk\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740884 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-config-data-custom\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.741295 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2af78116-7ef2-4447-b552-7b0d2eaedf90-etc-machine-id\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.741998 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2af78116-7ef2-4447-b552-7b0d2eaedf90-logs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.748852 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-config-data\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.749351 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.750121 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-scripts\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.750264 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-internal-tls-certs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.757417 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-public-tls-certs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.758062 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-config-data-custom\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.764328 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbslk\" (UniqueName: \"kubernetes.io/projected/2af78116-7ef2-4447-b552-7b0d2eaedf90-kube-api-access-lbslk\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.857545 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.763885 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:52 crc kubenswrapper[4782]: W0202 11:30:52.783753 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2af78116_7ef2_4447_b552_7b0d2eaedf90.slice/crio-aa7cd4004f278864ceac2f47eb1a7e98655a2ef01dea180beba2fd51f325c5e9 WatchSource:0}: Error finding container aa7cd4004f278864ceac2f47eb1a7e98655a2ef01dea180beba2fd51f325c5e9: Status 404 returned error can't find the container with id aa7cd4004f278864ceac2f47eb1a7e98655a2ef01dea180beba2fd51f325c5e9 Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.940359 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" path="/var/lib/kubelet/pods/c207707b-d720-4bfd-b93a-23ff4bc42674/volumes" Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.953231 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.953305 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.957079 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.958902 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.958970 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" gracePeriod=600 Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.186001 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" exitCode=0 Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.186222 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2"} Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.186373 4782 scope.go:117] "RemoveContainer" containerID="6f3d837b63dfbe34932b87b521d0696398b6ad3538c5af0b35f7849a712f00d7" Feb 02 
Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.188540 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2af78116-7ef2-4447-b552-7b0d2eaedf90","Type":"ContainerStarted","Data":"aa7cd4004f278864ceac2f47eb1a7e98655a2ef01dea180beba2fd51f325c5e9"} Feb 02 11:30:53 crc kubenswrapper[4782]: E0202 11:30:53.292867 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.637854 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.638268 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.639145 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"09295effad802ea8438e358847ecb01f49091fb80c4d58e17763b7d006278a11"} pod="openstack/horizon-78d997b864-7sqws" containerMessage="Container horizon failed startup probe, will be restarted" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.639184 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" containerID="cri-o://09295effad802ea8438e358847ecb01f49091fb80c4d58e17763b7d006278a11" gracePeriod=30 Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.957943 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5665456548-9x6qh" podUID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.958104 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.959720 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"d67252276d8993c4ef2e41eb9882c821d54823662c482a91c5ee6a5d0ca0b08f"} pod="openstack/horizon-5665456548-9x6qh" containerMessage="Container horizon failed startup probe, will be restarted" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.959780 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5665456548-9x6qh" podUID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerName="horizon" containerID="cri-o://d67252276d8993c4ef2e41eb9882c821d54823662c482a91c5ee6a5d0ca0b08f" gracePeriod=30 Feb 02 11:30:54 crc kubenswrapper[4782]: I0202 11:30:54.212452 4782 scope.go:117] "RemoveContainer" 
containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:30:54 crc kubenswrapper[4782]: E0202 11:30:54.213276 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:30:54 crc kubenswrapper[4782]: I0202 11:30:54.222392 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2af78116-7ef2-4447-b552-7b0d2eaedf90","Type":"ContainerStarted","Data":"6b4f2c34a0ea2ad76544a901a935d8f300f1c7b3face6a0d1253d41b02debbd3"} Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.015885 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.016537 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-central-agent" containerID="cri-o://46a450a4fb1e112f420ad3a53c0cc5db48370b4a72c1654cd86cff0553015607" gracePeriod=30 Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.016601 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="sg-core" containerID="cri-o://bbba23b489e8538f8cd4964c5dafc1fdbf720f48e53f9541bcc5bed2b196da47" gracePeriod=30 Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.016713 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-notification-agent" containerID="cri-o://ff86f60fa3072a6f2315da2b189baa4e07115cbc11bced2bb2789b7a5ef65ffc" gracePeriod=30 Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.016815 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="proxy-httpd" containerID="cri-o://f3d87444550f41bd00e5c006bf7055a00889453d07496499637cd29b4b017976" gracePeriod=30 Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.236569 4782 generic.go:334] "Generic (PLEG): container finished" podID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerID="bbba23b489e8538f8cd4964c5dafc1fdbf720f48e53f9541bcc5bed2b196da47" exitCode=2 Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.236628 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerDied","Data":"bbba23b489e8538f8cd4964c5dafc1fdbf720f48e53f9541bcc5bed2b196da47"} Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.247016 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2af78116-7ef2-4447-b552-7b0d2eaedf90","Type":"ContainerStarted","Data":"1811fdaf97c38e80672531dd87e0f9e75eb189569eee430c0fc51673b5a6fd78"} Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.247288 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.295549 4782 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.295529953 podStartE2EDuration="4.295529953s" podCreationTimestamp="2026-02-02 11:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:30:55.283289302 +0000 UTC m=+3135.167482018" watchObservedRunningTime="2026-02-02 11:30:55.295529953 +0000 UTC m=+3135.179722669" Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.443339 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.837690 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:55.960745 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79794c8ddf-x6sht"] Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:55.960963 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="dnsmasq-dns" containerID="cri-o://699e07b8f64810448ea2047e5e2c614ebf51ac1e603c9d37e164e29edf07c224" gracePeriod=10 Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.290888 4782 generic.go:334] "Generic (PLEG): container finished" podID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerID="699e07b8f64810448ea2047e5e2c614ebf51ac1e603c9d37e164e29edf07c224" exitCode=0 Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.290967 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" event={"ID":"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1","Type":"ContainerDied","Data":"699e07b8f64810448ea2047e5e2c614ebf51ac1e603c9d37e164e29edf07c224"} Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.302690 4782 generic.go:334] "Generic (PLEG): container finished" podID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerID="f3d87444550f41bd00e5c006bf7055a00889453d07496499637cd29b4b017976" exitCode=0 Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.302716 4782 generic.go:334] "Generic (PLEG): container finished" podID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerID="ff86f60fa3072a6f2315da2b189baa4e07115cbc11bced2bb2789b7a5ef65ffc" exitCode=0 Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.302740 4782 generic.go:334] "Generic (PLEG): container finished" podID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerID="46a450a4fb1e112f420ad3a53c0cc5db48370b4a72c1654cd86cff0553015607" exitCode=0 Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.303859 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerDied","Data":"f3d87444550f41bd00e5c006bf7055a00889453d07496499637cd29b4b017976"} Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.303964 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerDied","Data":"ff86f60fa3072a6f2315da2b189baa4e07115cbc11bced2bb2789b7a5ef65ffc"} Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.303981 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerDied","Data":"46a450a4fb1e112f420ad3a53c0cc5db48370b4a72c1654cd86cff0553015607"} Feb 02 11:30:57 
Feb 02 11:30:57 crc kubenswrapper[4782]: I0202 11:30:57.204747 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.195:5353: connect: connection refused"
Feb 02 11:30:58 crc kubenswrapper[4782]: I0202 11:30:58.323264 4782 generic.go:334] "Generic (PLEG): container finished" podID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerID="09295effad802ea8438e358847ecb01f49091fb80c4d58e17763b7d006278a11" exitCode=0
Feb 02 11:30:58 crc kubenswrapper[4782]: I0202 11:30:58.323364 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerDied","Data":"09295effad802ea8438e358847ecb01f49091fb80c4d58e17763b7d006278a11"}
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.338400 4782 generic.go:334] "Generic (PLEG): container finished" podID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerID="d67252276d8993c4ef2e41eb9882c821d54823662c482a91c5ee6a5d0ca0b08f" exitCode=0
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.338902 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5665456548-9x6qh" event={"ID":"306e30f3-8fe7-427e-b8ff-309a561dda88","Type":"ContainerDied","Data":"d67252276d8993c4ef2e41eb9882c821d54823662c482a91c5ee6a5d0ca0b08f"}
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.750008 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.871563 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-run-httpd\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.872016 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-scripts\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.872781 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8ztm\" (UniqueName: \"kubernetes.io/projected/497f3642-7f3b-417c-aa52-2ed3ddbcac75-kube-api-access-d8ztm\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.872934 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-combined-ca-bundle\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.872988 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-sg-core-conf-yaml\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.873023 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-log-httpd\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.873137 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-config-data\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.873162 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-ceilometer-tls-certs\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.872520 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.876182 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.878483 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-scripts" (OuterVolumeSpecName: "scripts") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.890710 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/497f3642-7f3b-417c-aa52-2ed3ddbcac75-kube-api-access-d8ztm" (OuterVolumeSpecName: "kube-api-access-d8ztm") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "kube-api-access-d8ztm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.977711 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.978056 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.978070 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.978081 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8ztm\" (UniqueName: \"kubernetes.io/projected/497f3642-7f3b-417c-aa52-2ed3ddbcac75-kube-api-access-d8ztm\") on node \"crc\" DevicePath \"\""
Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.055961 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.080550 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.160024 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht"
Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.160057 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.177775 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.196967 4782 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.196995 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.241060 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-config-data" (OuterVolumeSpecName: "config-data") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298048 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-openstack-edpm-ipam\") pod \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298106 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fks5j\" (UniqueName: \"kubernetes.io/projected/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-kube-api-access-fks5j\") pod \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298134 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-dns-svc\") pod \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298170 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-config\") pod \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298204 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-nb\") pod \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298331 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-sb\") pod \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298798 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.313950 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-kube-api-access-fks5j" (OuterVolumeSpecName: "kube-api-access-fks5j") pod "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" (UID: "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1"). InnerVolumeSpecName "kube-api-access-fks5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.363086 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" event={"ID":"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1","Type":"ContainerDied","Data":"65ae78131f6705fae79d726446209449b899ee5d5e41b756ff8cdcf0ea494dca"} Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.363117 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.363760 4782 scope.go:117] "RemoveContainer" containerID="699e07b8f64810448ea2047e5e2c614ebf51ac1e603c9d37e164e29edf07c224" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.376522 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerDied","Data":"64999423f31f3794e6c490461b475ade46dbc3e6082014de1c1140111ca7c591"} Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.376574 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.377389 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" (UID: "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.382892 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-config" (OuterVolumeSpecName: "config") pod "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" (UID: "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.382981 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5665456548-9x6qh" event={"ID":"306e30f3-8fe7-427e-b8ff-309a561dda88","Type":"ContainerStarted","Data":"6a4b12ba3f23d6e7e4363c3be7c096d829988d83db73bb8c3d10e0efdb2f7cc6"} Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.382528 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" (UID: "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.398478 4782 scope.go:117] "RemoveContainer" containerID="6f20530bb72c77a28a0b759dfecb8abeba2d4c4c9ec2b1e203807cb88c440c27" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.400433 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.400452 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fks5j\" (UniqueName: \"kubernetes.io/projected/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-kube-api-access-fks5j\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.400462 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.400472 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.404432 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerStarted","Data":"68394bc34f7dce9e271ce3f95971bd209e0e3a798e5433df9c66b03578b88eae"} Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.406124 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" (UID: "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.444485 4782 scope.go:117] "RemoveContainer" containerID="f3d87444550f41bd00e5c006bf7055a00889453d07496499637cd29b4b017976" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.445585 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" (UID: "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.497609 4782 scope.go:117] "RemoveContainer" containerID="bbba23b489e8538f8cd4964c5dafc1fdbf720f48e53f9541bcc5bed2b196da47" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.504749 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.504778 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.514565 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.536873 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.545071 4782 scope.go:117] "RemoveContainer" containerID="ff86f60fa3072a6f2315da2b189baa4e07115cbc11bced2bb2789b7a5ef65ffc" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.546705 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:00 crc kubenswrapper[4782]: E0202 11:31:00.547206 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="sg-core" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.547310 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="sg-core" Feb 02 11:31:00 crc kubenswrapper[4782]: E0202 11:31:00.547375 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="init" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.547461 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="init" Feb 02 11:31:00 crc kubenswrapper[4782]: E0202 11:31:00.547516 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-central-agent" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.547567 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-central-agent" Feb 02 11:31:00 crc kubenswrapper[4782]: E0202 11:31:00.547622 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-notification-agent" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.547704 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-notification-agent" Feb 02 11:31:00 crc kubenswrapper[4782]: E0202 11:31:00.547763 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="proxy-httpd" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.547814 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="proxy-httpd" Feb 02 11:31:00 crc kubenswrapper[4782]: E0202 11:31:00.547869 4782 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="dnsmasq-dns" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.547917 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="dnsmasq-dns" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.548186 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-notification-agent" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.548256 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="proxy-httpd" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.548320 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="sg-core" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.548408 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-central-agent" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.548488 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="dnsmasq-dns" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.550865 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.556182 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.556238 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.556465 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.562612 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.609583 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-log-httpd\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.609756 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4pj\" (UniqueName: \"kubernetes.io/projected/e675b2b1-c562-4e86-a104-9d16b83b8dc3-kube-api-access-7z4pj\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.609874 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-run-httpd\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.609979 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-config-data\") pod \"ceilometer-0\" (UID: 
\"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.610128 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.610432 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.610563 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.610967 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-scripts\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.715861 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-run-httpd\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.715924 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-config-data\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.715987 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.716087 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.716115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.716148 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-scripts\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.716188 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-log-httpd\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.716213 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4pj\" (UniqueName: \"kubernetes.io/projected/e675b2b1-c562-4e86-a104-9d16b83b8dc3-kube-api-access-7z4pj\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.717335 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-log-httpd\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.717705 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-run-httpd\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.722139 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.725956 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.742368 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-config-data\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.752656 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-scripts\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.759341 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z4pj\" (UniqueName: \"kubernetes.io/projected/e675b2b1-c562-4e86-a104-9d16b83b8dc3-kube-api-access-7z4pj\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.759457 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.844407 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" path="/var/lib/kubelet/pods/497f3642-7f3b-417c-aa52-2ed3ddbcac75/volumes" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.886223 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.901826 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79794c8ddf-x6sht"] Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.907383 4782 scope.go:117] "RemoveContainer" containerID="46a450a4fb1e112f420ad3a53c0cc5db48370b4a72c1654cd86cff0553015607" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.916149 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79794c8ddf-x6sht"] Feb 02 11:31:01 crc kubenswrapper[4782]: I0202 11:31:01.420584 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"be03de2e-2ddc-4cb1-b5be-7adb4add6582","Type":"ContainerStarted","Data":"8a44c5cbf74f9422a24df475561c5c4c5a1cf6d5939f12d2814da14061073213"} Feb 02 11:31:01 crc kubenswrapper[4782]: I0202 11:31:01.516457 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:02 crc kubenswrapper[4782]: I0202 11:31:02.445587 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"be03de2e-2ddc-4cb1-b5be-7adb4add6582","Type":"ContainerStarted","Data":"211219d562c4afafdd9cf2cd2a4262805a19e42c0355d453188162d50c32a834"} Feb 02 11:31:02 crc kubenswrapper[4782]: I0202 11:31:02.450678 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerStarted","Data":"1a51c45b57e3fe68cc34a30ee9e80c20fa6fe136d2e6a3325715aaa301b5e1bb"} Feb 02 11:31:02 crc kubenswrapper[4782]: I0202 11:31:02.477140 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.37734577 podStartE2EDuration="17.477115566s" podCreationTimestamp="2026-02-02 11:30:45 +0000 UTC" firstStartedPulling="2026-02-02 11:30:46.660043326 +0000 UTC m=+3126.544236052" lastFinishedPulling="2026-02-02 11:30:59.759813132 +0000 UTC m=+3139.644005848" observedRunningTime="2026-02-02 11:31:02.47342976 +0000 UTC m=+3142.357622476" watchObservedRunningTime="2026-02-02 11:31:02.477115566 +0000 UTC m=+3142.361308282" Feb 02 11:31:02 crc kubenswrapper[4782]: I0202 11:31:02.838819 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" path="/var/lib/kubelet/pods/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1/volumes" Feb 02 11:31:03 crc kubenswrapper[4782]: I0202 11:31:03.485184 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerStarted","Data":"a83eaaad5041e1c388995b790d6d930c9e79d82d0046f3781afcaf3d01e30a1c"} Feb 02 11:31:03 crc kubenswrapper[4782]: I0202 11:31:03.485627 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerStarted","Data":"08e57690c4d909f0b5f036a03ea4a1f2836c02d9bbaf7d8489722bc25e1e66d3"} Feb 02 11:31:04 crc kubenswrapper[4782]: I0202 11:31:04.504192 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerStarted","Data":"c25e68e145ab381717c489e5171a63c8989f000516f9f3ea8306406161e88ffa"} Feb 02 11:31:05 crc kubenswrapper[4782]: I0202 11:31:05.418657 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 02 11:31:06 crc kubenswrapper[4782]: I0202 11:31:06.829796 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:31:06 crc kubenswrapper[4782]: E0202 11:31:06.831046 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:31:07 crc kubenswrapper[4782]: I0202 11:31:07.531721 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerStarted","Data":"ddbbc7d2a92ff63da784e8d9a314b4eca721d7467e7ab0b26dc2ddb295d90307"} Feb 02 11:31:07 crc kubenswrapper[4782]: I0202 11:31:07.532025 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:31:07 crc kubenswrapper[4782]: I0202 11:31:07.560184 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.354439471 podStartE2EDuration="7.560162724s" podCreationTimestamp="2026-02-02 11:31:00 +0000 UTC" firstStartedPulling="2026-02-02 11:31:01.524366781 +0000 UTC m=+3141.408559497" lastFinishedPulling="2026-02-02 11:31:06.730090034 +0000 UTC m=+3146.614282750" observedRunningTime="2026-02-02 11:31:07.551227237 +0000 UTC m=+3147.435419953" watchObservedRunningTime="2026-02-02 11:31:07.560162724 +0000 UTC m=+3147.444355440" Feb 02 11:31:08 crc kubenswrapper[4782]: I0202 11:31:08.605619 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 02 11:31:08 crc kubenswrapper[4782]: I0202 11:31:08.627137 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:31:08 crc kubenswrapper[4782]: I0202 11:31:08.627188 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:31:08 crc kubenswrapper[4782]: I0202 11:31:08.648666 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:31:08 crc kubenswrapper[4782]: I0202 11:31:08.945472 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:31:08 crc kubenswrapper[4782]: I0202 11:31:08.945922 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:31:09 crc kubenswrapper[4782]: I0202 11:31:09.550193 4782 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/manila-scheduler-0" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="manila-scheduler" containerID="cri-o://b497c2954dd3a78c6953a3ffd64222499e19e8de5359aac6f81a3ec8c829fd03" gracePeriod=30 Feb 02 11:31:09 crc kubenswrapper[4782]: I0202 11:31:09.550976 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="probe" containerID="cri-o://cc68c0f777fc5436c540b425a3326b6391ec6d6b6b3b5fe43f8e31bcd626fc43" gracePeriod=30 Feb 02 11:31:10 crc kubenswrapper[4782]: I0202 11:31:10.561897 4782 generic.go:334] "Generic (PLEG): container finished" podID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerID="cc68c0f777fc5436c540b425a3326b6391ec6d6b6b3b5fe43f8e31bcd626fc43" exitCode=0 Feb 02 11:31:10 crc kubenswrapper[4782]: I0202 11:31:10.561975 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a","Type":"ContainerDied","Data":"cc68c0f777fc5436c540b425a3326b6391ec6d6b6b3b5fe43f8e31bcd626fc43"} Feb 02 11:31:11 crc kubenswrapper[4782]: I0202 11:31:11.618965 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:11 crc kubenswrapper[4782]: I0202 11:31:11.619295 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-central-agent" containerID="cri-o://08e57690c4d909f0b5f036a03ea4a1f2836c02d9bbaf7d8489722bc25e1e66d3" gracePeriod=30 Feb 02 11:31:11 crc kubenswrapper[4782]: I0202 11:31:11.619378 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-notification-agent" containerID="cri-o://a83eaaad5041e1c388995b790d6d930c9e79d82d0046f3781afcaf3d01e30a1c" gracePeriod=30 Feb 02 11:31:11 crc kubenswrapper[4782]: I0202 11:31:11.619394 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="sg-core" containerID="cri-o://c25e68e145ab381717c489e5171a63c8989f000516f9f3ea8306406161e88ffa" gracePeriod=30 Feb 02 11:31:11 crc kubenswrapper[4782]: I0202 11:31:11.619567 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="proxy-httpd" containerID="cri-o://ddbbc7d2a92ff63da784e8d9a314b4eca721d7467e7ab0b26dc2ddb295d90307" gracePeriod=30 Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.587370 4782 generic.go:334] "Generic (PLEG): container finished" podID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerID="b497c2954dd3a78c6953a3ffd64222499e19e8de5359aac6f81a3ec8c829fd03" exitCode=0 Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.587764 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a","Type":"ContainerDied","Data":"b497c2954dd3a78c6953a3ffd64222499e19e8de5359aac6f81a3ec8c829fd03"} Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599859 4782 generic.go:334] "Generic (PLEG): container finished" podID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerID="ddbbc7d2a92ff63da784e8d9a314b4eca721d7467e7ab0b26dc2ddb295d90307" exitCode=0 Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599906 4782 
generic.go:334] "Generic (PLEG): container finished" podID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerID="c25e68e145ab381717c489e5171a63c8989f000516f9f3ea8306406161e88ffa" exitCode=2 Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599918 4782 generic.go:334] "Generic (PLEG): container finished" podID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerID="a83eaaad5041e1c388995b790d6d930c9e79d82d0046f3781afcaf3d01e30a1c" exitCode=0 Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599932 4782 generic.go:334] "Generic (PLEG): container finished" podID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerID="08e57690c4d909f0b5f036a03ea4a1f2836c02d9bbaf7d8489722bc25e1e66d3" exitCode=0 Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599956 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerDied","Data":"ddbbc7d2a92ff63da784e8d9a314b4eca721d7467e7ab0b26dc2ddb295d90307"} Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599985 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerDied","Data":"c25e68e145ab381717c489e5171a63c8989f000516f9f3ea8306406161e88ffa"} Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599999 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerDied","Data":"a83eaaad5041e1c388995b790d6d930c9e79d82d0046f3781afcaf3d01e30a1c"} Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.600012 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerDied","Data":"08e57690c4d909f0b5f036a03ea4a1f2836c02d9bbaf7d8489722bc25e1e66d3"} Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.619288 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.701409 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-ceilometer-tls-certs\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.701690 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-sg-core-conf-yaml\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.701783 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z4pj\" (UniqueName: \"kubernetes.io/projected/e675b2b1-c562-4e86-a104-9d16b83b8dc3-kube-api-access-7z4pj\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.701898 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-combined-ca-bundle\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.701945 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-scripts\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.702086 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-run-httpd\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.702285 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-log-httpd\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.702358 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-config-data\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.705150 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.707012 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.718627 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-scripts" (OuterVolumeSpecName: "scripts") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.718819 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e675b2b1-c562-4e86-a104-9d16b83b8dc3-kube-api-access-7z4pj" (OuterVolumeSpecName: "kube-api-access-7z4pj") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "kube-api-access-7z4pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.856220 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.856261 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z4pj\" (UniqueName: \"kubernetes.io/projected/e675b2b1-c562-4e86-a104-9d16b83b8dc3-kube-api-access-7z4pj\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.856278 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.856288 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.882368 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.887497 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.958792 4782 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.959152 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.975588 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.062027 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.110522 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-config-data" (OuterVolumeSpecName: "config-data") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.163765 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.260793 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.369707 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-etc-machine-id\") pod \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.369789 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data\") pod \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.369844 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data-custom\") pod \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.369888 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" (UID: "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.370704 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-scripts\") pod \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.370860 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fjnn\" (UniqueName: \"kubernetes.io/projected/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-kube-api-access-6fjnn\") pod \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.370889 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-combined-ca-bundle\") pod \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.371377 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.378226 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" (UID: "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.379923 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-scripts" (OuterVolumeSpecName: "scripts") pod "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" (UID: "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.382912 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-kube-api-access-6fjnn" (OuterVolumeSpecName: "kube-api-access-6fjnn") pod "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" (UID: "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a"). InnerVolumeSpecName "kube-api-access-6fjnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.439763 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" (UID: "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.476044 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.476082 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fjnn\" (UniqueName: \"kubernetes.io/projected/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-kube-api-access-6fjnn\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.476092 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.476100 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.500865 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data" (OuterVolumeSpecName: "config-data") pod "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" (UID: "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.578231 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.610937 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a","Type":"ContainerDied","Data":"8540ed31a6d2b8e3e589043ccf8a1a2071b1ba7d96df1fa53995124ff3fbc8af"} Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.611301 4782 scope.go:117] "RemoveContainer" containerID="cc68c0f777fc5436c540b425a3326b6391ec6d6b6b3b5fe43f8e31bcd626fc43" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.610975 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.624832 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerDied","Data":"1a51c45b57e3fe68cc34a30ee9e80c20fa6fe136d2e6a3325715aaa301b5e1bb"} Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.624883 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.667825 4782 scope.go:117] "RemoveContainer" containerID="b497c2954dd3a78c6953a3ffd64222499e19e8de5359aac6f81a3ec8c829fd03" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.675241 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.690640 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.700437 4782 scope.go:117] "RemoveContainer" containerID="ddbbc7d2a92ff63da784e8d9a314b4eca721d7467e7ab0b26dc2ddb295d90307" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.710085 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.720367 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.729725 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: E0202 11:31:13.730223 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-notification-agent" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730243 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-notification-agent" Feb 02 11:31:13 crc kubenswrapper[4782]: E0202 11:31:13.730251 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="proxy-httpd" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730259 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="proxy-httpd" Feb 02 11:31:13 crc kubenswrapper[4782]: E0202 11:31:13.730286 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" 
containerName="sg-core" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730295 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="sg-core" Feb 02 11:31:13 crc kubenswrapper[4782]: E0202 11:31:13.730303 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="probe" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730310 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="probe" Feb 02 11:31:13 crc kubenswrapper[4782]: E0202 11:31:13.730331 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="manila-scheduler" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730339 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="manila-scheduler" Feb 02 11:31:13 crc kubenswrapper[4782]: E0202 11:31:13.730357 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-central-agent" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730364 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-central-agent" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730571 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="manila-scheduler" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730589 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="probe" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730604 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="sg-core" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730616 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-central-agent" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730633 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-notification-agent" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730649 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="proxy-httpd" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.731832 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.743844 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.744068 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.752742 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.755644 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.762312 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.762385 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.762313 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.763702 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.780087 4782 scope.go:117] "RemoveContainer" containerID="c25e68e145ab381717c489e5171a63c8989f000516f9f3ea8306406161e88ffa" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.869733 4782 scope.go:117] "RemoveContainer" containerID="a83eaaad5041e1c388995b790d6d930c9e79d82d0046f3781afcaf3d01e30a1c" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.884488 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e465ef3-3141-429f-927f-db1eabdff230-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.884797 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-scripts\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.889843 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.889952 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4ssc\" (UniqueName: \"kubernetes.io/projected/5cbff496-9e10-4868-ab32-849a8b238474-kube-api-access-n4ssc\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.890086 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-config-data\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.890191 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.891216 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.891355 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cbff496-9e10-4868-ab32-849a8b238474-run-httpd\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.891481 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-scripts\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.891587 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2qsl\" (UniqueName: \"kubernetes.io/projected/6e465ef3-3141-429f-927f-db1eabdff230-kube-api-access-m2qsl\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.891689 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.891887 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-config-data\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.892011 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cbff496-9e10-4868-ab32-849a8b238474-log-httpd\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.892115 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.904049 4782 scope.go:117] "RemoveContainer" containerID="08e57690c4d909f0b5f036a03ea4a1f2836c02d9bbaf7d8489722bc25e1e66d3" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.993904 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-config-data\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.993974 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5cbff496-9e10-4868-ab32-849a8b238474-log-httpd\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994000 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994040 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e465ef3-3141-429f-927f-db1eabdff230-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994082 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-scripts\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994096 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994112 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4ssc\" (UniqueName: \"kubernetes.io/projected/5cbff496-9e10-4868-ab32-849a8b238474-kube-api-access-n4ssc\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994139 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-config-data\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994156 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994202 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994238 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cbff496-9e10-4868-ab32-849a8b238474-run-httpd\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994270 4782 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-scripts\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994297 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2qsl\" (UniqueName: \"kubernetes.io/projected/6e465ef3-3141-429f-927f-db1eabdff230-kube-api-access-m2qsl\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.995114 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e465ef3-3141-429f-927f-db1eabdff230-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.997179 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cbff496-9e10-4868-ab32-849a8b238474-run-httpd\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.998336 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cbff496-9e10-4868-ab32-849a8b238474-log-httpd\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.000462 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-config-data\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.004753 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.006400 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-scripts\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.011990 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.012928 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.015327 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-scripts\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.019457 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4ssc\" (UniqueName: \"kubernetes.io/projected/5cbff496-9e10-4868-ab32-849a8b238474-kube-api-access-n4ssc\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.019814 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.023387 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.033460 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-config-data\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.053253 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2qsl\" (UniqueName: \"kubernetes.io/projected/6e465ef3-3141-429f-927f-db1eabdff230-kube-api-access-m2qsl\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.187461 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.189250 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.530895 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.799781 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.845425 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" path="/var/lib/kubelet/pods/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a/volumes" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.846252 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" path="/var/lib/kubelet/pods/e675b2b1-c562-4e86-a104-9d16b83b8dc3/volumes" Feb 02 11:31:15 crc kubenswrapper[4782]: I0202 11:31:15.041475 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:15 crc kubenswrapper[4782]: W0202 11:31:15.056769 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cbff496_9e10_4868_ab32_849a8b238474.slice/crio-9389122e851d2e1b8b9f2ec00e7eaf2feadd86602edf76eb19eea315fa8a27c2 WatchSource:0}: Error finding container 9389122e851d2e1b8b9f2ec00e7eaf2feadd86602edf76eb19eea315fa8a27c2: Status 404 returned error can't find the container with id 9389122e851d2e1b8b9f2ec00e7eaf2feadd86602edf76eb19eea315fa8a27c2 Feb 02 11:31:15 crc kubenswrapper[4782]: I0202 11:31:15.669854 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"6e465ef3-3141-429f-927f-db1eabdff230","Type":"ContainerStarted","Data":"6a232de3971540c0f74cae967f197fd2f2095afb9ccd175b9d5701fb39aefb84"} Feb 02 11:31:15 crc kubenswrapper[4782]: I0202 11:31:15.671795 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cbff496-9e10-4868-ab32-849a8b238474","Type":"ContainerStarted","Data":"9389122e851d2e1b8b9f2ec00e7eaf2feadd86602edf76eb19eea315fa8a27c2"} Feb 02 11:31:16 crc kubenswrapper[4782]: I0202 11:31:16.684338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"6e465ef3-3141-429f-927f-db1eabdff230","Type":"ContainerStarted","Data":"09f759fd2b011d23b48d6b79a4ac12bafd6286e720120fbe8ce6b8ecacc447e2"} Feb 02 11:31:16 crc kubenswrapper[4782]: I0202 11:31:16.684939 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"6e465ef3-3141-429f-927f-db1eabdff230","Type":"ContainerStarted","Data":"a4308919fc2017fbe6c4d0bd48ccd06e0007da63b0f58790c345fb7647ddd51b"} Feb 02 11:31:16 crc kubenswrapper[4782]: I0202 11:31:16.687198 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cbff496-9e10-4868-ab32-849a8b238474","Type":"ContainerStarted","Data":"9c2b46a5fb6e243e563f4736d3a8daca867f5bcb670920c92e635f0899570290"} Feb 02 11:31:17 crc kubenswrapper[4782]: I0202 11:31:17.688127 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 02 11:31:17 crc kubenswrapper[4782]: I0202 11:31:17.707135 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cbff496-9e10-4868-ab32-849a8b238474","Type":"ContainerStarted","Data":"538429587e15dbebb413691b8fc1d30c67fd693f14506e6af4aa454f4e50ab8e"} Feb 
02 11:31:17 crc kubenswrapper[4782]: I0202 11:31:17.709383 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.709365503 podStartE2EDuration="4.709365503s" podCreationTimestamp="2026-02-02 11:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:31:16.714190879 +0000 UTC m=+3156.598383605" watchObservedRunningTime="2026-02-02 11:31:17.709365503 +0000 UTC m=+3157.593558219" Feb 02 11:31:17 crc kubenswrapper[4782]: I0202 11:31:17.770122 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:31:17 crc kubenswrapper[4782]: I0202 11:31:17.821638 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:31:17 crc kubenswrapper[4782]: E0202 11:31:17.821962 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:31:18 crc kubenswrapper[4782]: I0202 11:31:18.629710 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Feb 02 11:31:18 crc kubenswrapper[4782]: I0202 11:31:18.716914 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cbff496-9e10-4868-ab32-849a8b238474","Type":"ContainerStarted","Data":"061a8f9cee308c41e41a30c18c574462297aa061337f4afd80fc92856b2ac5d1"} Feb 02 11:31:18 crc kubenswrapper[4782]: I0202 11:31:18.717121 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="manila-share" containerID="cri-o://8a44c5cbf74f9422a24df475561c5c4c5a1cf6d5939f12d2814da14061073213" gracePeriod=30 Feb 02 11:31:18 crc kubenswrapper[4782]: I0202 11:31:18.717171 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="probe" containerID="cri-o://211219d562c4afafdd9cf2cd2a4262805a19e42c0355d453188162d50c32a834" gracePeriod=30 Feb 02 11:31:18 crc kubenswrapper[4782]: I0202 11:31:18.947167 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5665456548-9x6qh" podUID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Feb 02 11:31:19 crc kubenswrapper[4782]: I0202 11:31:19.737073 4782 generic.go:334] "Generic (PLEG): container finished" podID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerID="211219d562c4afafdd9cf2cd2a4262805a19e42c0355d453188162d50c32a834" exitCode=0 Feb 02 11:31:19 crc kubenswrapper[4782]: I0202 11:31:19.737404 4782 generic.go:334] "Generic (PLEG): container finished" 
podID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerID="8a44c5cbf74f9422a24df475561c5c4c5a1cf6d5939f12d2814da14061073213" exitCode=1 Feb 02 11:31:19 crc kubenswrapper[4782]: I0202 11:31:19.737429 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"be03de2e-2ddc-4cb1-b5be-7adb4add6582","Type":"ContainerDied","Data":"211219d562c4afafdd9cf2cd2a4262805a19e42c0355d453188162d50c32a834"} Feb 02 11:31:19 crc kubenswrapper[4782]: I0202 11:31:19.737458 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"be03de2e-2ddc-4cb1-b5be-7adb4add6582","Type":"ContainerDied","Data":"8a44c5cbf74f9422a24df475561c5c4c5a1cf6d5939f12d2814da14061073213"} Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.209260 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359089 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-ceph\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359200 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data-custom\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359232 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhpfm\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-kube-api-access-mhpfm\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359440 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359501 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-var-lib-manila\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359596 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-scripts\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359632 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-etc-machine-id\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359686 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-combined-ca-bundle\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.361097 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.361179 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.368921 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-ceph" (OuterVolumeSpecName: "ceph") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.369064 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.371786 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-kube-api-access-mhpfm" (OuterVolumeSpecName: "kube-api-access-mhpfm") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "kube-api-access-mhpfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.385977 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-scripts" (OuterVolumeSpecName: "scripts") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.454897 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.461958 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.461999 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.462014 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.462028 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.462038 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.462050 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhpfm\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-kube-api-access-mhpfm\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.462064 4782 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-var-lib-manila\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.520477 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data" (OuterVolumeSpecName: "config-data") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.564292 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.768376 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"be03de2e-2ddc-4cb1-b5be-7adb4add6582","Type":"ContainerDied","Data":"08c5105f53bbbb34b5fc28061ef06193852bf85c93b32e422897b4cbfd23205d"} Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.769616 4782 scope.go:117] "RemoveContainer" containerID="211219d562c4afafdd9cf2cd2a4262805a19e42c0355d453188162d50c32a834" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.768393 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.801125 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cbff496-9e10-4868-ab32-849a8b238474","Type":"ContainerStarted","Data":"ccbf21f42532ea2654c6007d5b2985ad8c55bd36c3e95b5de1de9290a4a613c8"} Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.801750 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.832981 4782 scope.go:117] "RemoveContainer" containerID="8a44c5cbf74f9422a24df475561c5c4c5a1cf6d5939f12d2814da14061073213" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.965074 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.3640915209999998 podStartE2EDuration="7.965057826s" podCreationTimestamp="2026-02-02 11:31:13 +0000 UTC" firstStartedPulling="2026-02-02 11:31:15.075608269 +0000 UTC m=+3154.959800995" lastFinishedPulling="2026-02-02 11:31:19.676574584 +0000 UTC m=+3159.560767300" observedRunningTime="2026-02-02 11:31:20.895160247 +0000 UTC m=+3160.779352973" watchObservedRunningTime="2026-02-02 11:31:20.965057826 +0000 UTC m=+3160.849250542" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.991328 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.053873 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.070715 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:31:21 crc kubenswrapper[4782]: E0202 11:31:21.071195 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="manila-share" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.071217 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="manila-share" Feb 02 11:31:21 crc kubenswrapper[4782]: E0202 11:31:21.071239 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="probe" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.071246 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="probe" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.071411 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="probe" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.071438 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="manila-share" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.072401 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.080211 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.082860 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.207905 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208282 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04aa7a3f-6353-4317-8825-1447f8a88842-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208315 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-config-data\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208349 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/04aa7a3f-6353-4317-8825-1447f8a88842-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208390 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-scripts\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208561 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq2ss\" (UniqueName: \"kubernetes.io/projected/04aa7a3f-6353-4317-8825-1447f8a88842-kube-api-access-nq2ss\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208597 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/04aa7a3f-6353-4317-8825-1447f8a88842-ceph\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208622 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc 
kubenswrapper[4782]: I0202 11:31:21.310642 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq2ss\" (UniqueName: \"kubernetes.io/projected/04aa7a3f-6353-4317-8825-1447f8a88842-kube-api-access-nq2ss\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311041 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/04aa7a3f-6353-4317-8825-1447f8a88842-ceph\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311141 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311286 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311375 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04aa7a3f-6353-4317-8825-1447f8a88842-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311461 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-config-data\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/04aa7a3f-6353-4317-8825-1447f8a88842-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311669 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-scripts\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.312841 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04aa7a3f-6353-4317-8825-1447f8a88842-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.312983 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/04aa7a3f-6353-4317-8825-1447f8a88842-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.324467 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-scripts\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.324902 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-config-data\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.325191 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.325306 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/04aa7a3f-6353-4317-8825-1447f8a88842-ceph\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.336558 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.360244 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq2ss\" (UniqueName: \"kubernetes.io/projected/04aa7a3f-6353-4317-8825-1447f8a88842-kube-api-access-nq2ss\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.400314 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.937927 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:31:22 crc kubenswrapper[4782]: I0202 11:31:22.845511 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" path="/var/lib/kubelet/pods/be03de2e-2ddc-4cb1-b5be-7adb4add6582/volumes" Feb 02 11:31:22 crc kubenswrapper[4782]: I0202 11:31:22.854926 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"04aa7a3f-6353-4317-8825-1447f8a88842","Type":"ContainerStarted","Data":"49ae4d7efbdb39580241566cc9d6cdc3e54913589dec64371bc05d9dc27103b7"} Feb 02 11:31:22 crc kubenswrapper[4782]: I0202 11:31:22.854974 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"04aa7a3f-6353-4317-8825-1447f8a88842","Type":"ContainerStarted","Data":"dea5bebdf56912f1382c7285fb8e0288248bd9fc7aa62131e841b354c5b1e9b4"} Feb 02 11:31:23 crc kubenswrapper[4782]: I0202 11:31:23.865184 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"04aa7a3f-6353-4317-8825-1447f8a88842","Type":"ContainerStarted","Data":"43415fcef60a234b1660bdd058149b4207de7a075a7e69fc616a09a4c4884368"} Feb 02 11:31:23 crc kubenswrapper[4782]: I0202 11:31:23.905135 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.9051153899999997 podStartE2EDuration="3.90511539s" podCreationTimestamp="2026-02-02 11:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:31:23.896845422 +0000 UTC m=+3163.781038138" watchObservedRunningTime="2026-02-02 11:31:23.90511539 +0000 UTC m=+3163.789308106" Feb 02 11:31:24 crc kubenswrapper[4782]: I0202 11:31:24.189455 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 02 11:31:28 crc kubenswrapper[4782]: I0202 11:31:28.627917 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Feb 02 11:31:28 crc kubenswrapper[4782]: I0202 11:31:28.946732 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5665456548-9x6qh" podUID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Feb 02 11:31:30 crc kubenswrapper[4782]: I0202 11:31:30.827522 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:31:30 crc kubenswrapper[4782]: E0202 11:31:30.828015 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:31:31 crc kubenswrapper[4782]: I0202 11:31:31.401343 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 02 11:31:36 crc kubenswrapper[4782]: I0202 11:31:36.312115 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 02 11:31:42 crc kubenswrapper[4782]: I0202 11:31:42.082295 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:31:42 crc kubenswrapper[4782]: I0202 11:31:42.111817 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:31:43 crc kubenswrapper[4782]: I0202 11:31:43.491005 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 02 11:31:43 crc kubenswrapper[4782]: I0202 11:31:43.821201 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:31:43 crc kubenswrapper[4782]: E0202 11:31:43.821487 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:31:44 crc kubenswrapper[4782]: I0202 11:31:44.110779 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:31:44 crc kubenswrapper[4782]: I0202 11:31:44.119436 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:31:44 crc kubenswrapper[4782]: I0202 11:31:44.222338 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78d997b864-7sqws"] Feb 02 11:31:44 crc kubenswrapper[4782]: I0202 11:31:44.271002 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 11:31:45 crc kubenswrapper[4782]: I0202 11:31:45.065098 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon-log" containerID="cri-o://d69e181d159dbc6c08cd056aa1bf1d0f3003f078165449a5f856df54eed28770" gracePeriod=30 Feb 02 11:31:45 crc kubenswrapper[4782]: I0202 11:31:45.065219 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" containerID="cri-o://68394bc34f7dce9e271ce3f95971bd209e0e3a798e5433df9c66b03578b88eae" gracePeriod=30 Feb 02 11:31:48 crc kubenswrapper[4782]: I0202 11:31:48.639383 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:56172->10.217.0.242:8443: read: connection reset by peer" Feb 02 11:31:49 crc kubenswrapper[4782]: I0202 11:31:49.099094 4782 generic.go:334] "Generic (PLEG): container finished" 
podID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerID="68394bc34f7dce9e271ce3f95971bd209e0e3a798e5433df9c66b03578b88eae" exitCode=0 Feb 02 11:31:49 crc kubenswrapper[4782]: I0202 11:31:49.099156 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerDied","Data":"68394bc34f7dce9e271ce3f95971bd209e0e3a798e5433df9c66b03578b88eae"} Feb 02 11:31:49 crc kubenswrapper[4782]: I0202 11:31:49.099484 4782 scope.go:117] "RemoveContainer" containerID="09295effad802ea8438e358847ecb01f49091fb80c4d58e17763b7d006278a11" Feb 02 11:31:57 crc kubenswrapper[4782]: I0202 11:31:57.821882 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:31:57 crc kubenswrapper[4782]: E0202 11:31:57.822476 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:31:58 crc kubenswrapper[4782]: I0202 11:31:58.628342 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Feb 02 11:32:08 crc kubenswrapper[4782]: I0202 11:32:08.628088 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Feb 02 11:32:08 crc kubenswrapper[4782]: I0202 11:32:08.628742 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:32:10 crc kubenswrapper[4782]: I0202 11:32:10.829560 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:32:10 crc kubenswrapper[4782]: E0202 11:32:10.830119 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.409120 4782 generic.go:334] "Generic (PLEG): container finished" podID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerID="d69e181d159dbc6c08cd056aa1bf1d0f3003f078165449a5f856df54eed28770" exitCode=137 Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.409295 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerDied","Data":"d69e181d159dbc6c08cd056aa1bf1d0f3003f078165449a5f856df54eed28770"} Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.519280 
4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682225 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-scripts\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682293 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62cd5c24-315a-45c1-bca8-08696f1080cd-logs\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682363 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xwsf\" (UniqueName: \"kubernetes.io/projected/62cd5c24-315a-45c1-bca8-08696f1080cd-kube-api-access-6xwsf\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682538 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-tls-certs\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682579 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-secret-key\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682629 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-config-data\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682691 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-combined-ca-bundle\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682840 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62cd5c24-315a-45c1-bca8-08696f1080cd-logs" (OuterVolumeSpecName: "logs") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.683106 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62cd5c24-315a-45c1-bca8-08696f1080cd-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.689124 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.695489 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62cd5c24-315a-45c1-bca8-08696f1080cd-kube-api-access-6xwsf" (OuterVolumeSpecName: "kube-api-access-6xwsf") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "kube-api-access-6xwsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.742555 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.744283 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-config-data" (OuterVolumeSpecName: "config-data") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.757608 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-scripts" (OuterVolumeSpecName: "scripts") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.786022 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.786067 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.786081 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.786092 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xwsf\" (UniqueName: \"kubernetes.io/projected/62cd5c24-315a-45c1-bca8-08696f1080cd-kube-api-access-6xwsf\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.786103 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.800874 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.888286 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.419846 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerDied","Data":"d24d9db9a798247d6fbcd136dc3f9d15a710d6aee5946c313b4ac9b4fb5bc96d"} Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.419987 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.420744 4782 scope.go:117] "RemoveContainer" containerID="68394bc34f7dce9e271ce3f95971bd209e0e3a798e5433df9c66b03578b88eae" Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.459893 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78d997b864-7sqws"] Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.470530 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78d997b864-7sqws"] Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.611043 4782 scope.go:117] "RemoveContainer" containerID="d69e181d159dbc6c08cd056aa1bf1d0f3003f078165449a5f856df54eed28770" Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.832320 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" path="/var/lib/kubelet/pods/62cd5c24-315a-45c1-bca8-08696f1080cd/volumes" Feb 02 11:32:24 crc kubenswrapper[4782]: I0202 11:32:24.820985 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:32:24 crc kubenswrapper[4782]: E0202 11:32:24.821763 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:32:36 crc kubenswrapper[4782]: I0202 11:32:36.821497 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:32:36 crc kubenswrapper[4782]: E0202 11:32:36.822374 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.416349 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 11:32:47 crc kubenswrapper[4782]: E0202 11:32:47.423155 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.423194 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" Feb 02 11:32:47 crc kubenswrapper[4782]: E0202 11:32:47.423218 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.423227 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" Feb 02 11:32:47 crc kubenswrapper[4782]: E0202 11:32:47.423268 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon-log" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.423277 4782 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon-log" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.423531 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.423548 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon-log" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.423561 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.424433 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.429120 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.429270 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.430079 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.431100 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nvl62" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.434085 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.506320 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-config-data\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.506716 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.506809 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.609270 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.609491 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.609629 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.609846 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gp9\" (UniqueName: \"kubernetes.io/projected/a5a266a5-ac00-49e1-9443-def4cebe65ad-kube-api-access-r8gp9\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.609944 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.610012 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.610133 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.610169 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.610214 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.610970 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-config-data\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.611855 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config\") pod \"tempest-tests-tempest\" (UID: 
\"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.622000 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.713572 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.713784 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.713900 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gp9\" (UniqueName: \"kubernetes.io/projected/a5a266a5-ac00-49e1-9443-def4cebe65ad-kube-api-access-r8gp9\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.714036 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.714068 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.714109 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.714172 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.715075 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 
crc kubenswrapper[4782]: I0202 11:32:47.716270 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.718723 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.719291 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.730583 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gp9\" (UniqueName: \"kubernetes.io/projected/a5a266a5-ac00-49e1-9443-def4cebe65ad-kube-api-access-r8gp9\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.746749 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.821666 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:32:47 crc kubenswrapper[4782]: E0202 11:32:47.822242 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:32:48 crc kubenswrapper[4782]: I0202 11:32:48.048905 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 11:32:48 crc kubenswrapper[4782]: I0202 11:32:48.499042 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 11:32:48 crc kubenswrapper[4782]: I0202 11:32:48.692048 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a5a266a5-ac00-49e1-9443-def4cebe65ad","Type":"ContainerStarted","Data":"165e18b6fa145b8da7ea6159f1baabb2c9b6bfa2ecbd382cecfe714965ca36c1"} Feb 02 11:32:58 crc kubenswrapper[4782]: I0202 11:32:58.824052 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:32:58 crc kubenswrapper[4782]: E0202 11:32:58.825132 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:33:09 crc kubenswrapper[4782]: I0202 11:33:09.821796 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:33:09 crc kubenswrapper[4782]: E0202 11:33:09.823022 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:33:21 crc kubenswrapper[4782]: I0202 11:33:21.823163 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:33:21 crc kubenswrapper[4782]: E0202 11:33:21.825010 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:33:29 crc kubenswrapper[4782]: E0202 11:33:29.258919 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 02 11:33:29 crc kubenswrapper[4782]: E0202 11:33:29.262708 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a5a266a5-ac00-49e1-9443-def4cebe65ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:33:29 crc kubenswrapper[4782]: E0202 11:33:29.263984 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="a5a266a5-ac00-49e1-9443-def4cebe65ad" Feb 02 11:33:30 crc kubenswrapper[4782]: E0202 11:33:30.137939 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a5a266a5-ac00-49e1-9443-def4cebe65ad" Feb 02 11:33:34 crc kubenswrapper[4782]: I0202 11:33:34.821285 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:33:34 crc kubenswrapper[4782]: E0202 11:33:34.822058 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:33:44 crc kubenswrapper[4782]: I0202 11:33:44.049951 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 02 11:33:46 crc kubenswrapper[4782]: I0202 11:33:46.274871 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a5a266a5-ac00-49e1-9443-def4cebe65ad","Type":"ContainerStarted","Data":"762b4b5b8e241a2a3f60ed6176e6adc0554b048edbcf9782fa78026e47e66f14"} Feb 02 11:33:46 crc kubenswrapper[4782]: I0202 11:33:46.297047 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.757753717 podStartE2EDuration="1m0.297026302s" podCreationTimestamp="2026-02-02 11:32:46 +0000 UTC" firstStartedPulling="2026-02-02 11:32:48.507980546 +0000 UTC m=+3248.392173252" lastFinishedPulling="2026-02-02 11:33:44.047253121 +0000 UTC m=+3303.931445837" observedRunningTime="2026-02-02 11:33:46.295617081 +0000 UTC m=+3306.179809797" watchObservedRunningTime="2026-02-02 11:33:46.297026302 +0000 UTC m=+3306.181219018" Feb 02 11:33:48 crc kubenswrapper[4782]: I0202 11:33:48.821941 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:33:48 crc kubenswrapper[4782]: E0202 11:33:48.823250 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.123905 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ff29q"] Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.126582 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.146422 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ff29q"] Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.250216 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-utilities\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.250330 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxh7n\" (UniqueName: \"kubernetes.io/projected/84d1feed-8d12-41b5-8606-4ac037256f14-kube-api-access-bxh7n\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.250602 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-catalog-content\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.353046 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-utilities\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.353171 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxh7n\" (UniqueName: \"kubernetes.io/projected/84d1feed-8d12-41b5-8606-4ac037256f14-kube-api-access-bxh7n\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.353243 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-catalog-content\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.353884 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-catalog-content\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.354206 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-utilities\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.375570 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bxh7n\" (UniqueName: \"kubernetes.io/projected/84d1feed-8d12-41b5-8606-4ac037256f14-kube-api-access-bxh7n\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.452163 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:51 crc kubenswrapper[4782]: I0202 11:33:51.044596 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ff29q"] Feb 02 11:33:51 crc kubenswrapper[4782]: I0202 11:33:51.330229 4782 generic.go:334] "Generic (PLEG): container finished" podID="84d1feed-8d12-41b5-8606-4ac037256f14" containerID="7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7" exitCode=0 Feb 02 11:33:51 crc kubenswrapper[4782]: I0202 11:33:51.330275 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerDied","Data":"7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7"} Feb 02 11:33:51 crc kubenswrapper[4782]: I0202 11:33:51.330335 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerStarted","Data":"df451c3fc192c5378c3dad8d8c95604469154b7e53f7269d58d6c31fac3aa873"} Feb 02 11:33:52 crc kubenswrapper[4782]: I0202 11:33:52.341036 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerStarted","Data":"e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d"} Feb 02 11:33:53 crc kubenswrapper[4782]: I0202 11:33:53.353743 4782 generic.go:334] "Generic (PLEG): container finished" podID="84d1feed-8d12-41b5-8606-4ac037256f14" containerID="e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d" exitCode=0 Feb 02 11:33:53 crc kubenswrapper[4782]: I0202 11:33:53.353803 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerDied","Data":"e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d"} Feb 02 11:33:54 crc kubenswrapper[4782]: I0202 11:33:54.365034 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerStarted","Data":"677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769"} Feb 02 11:33:54 crc kubenswrapper[4782]: I0202 11:33:54.395365 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ff29q" podStartSLOduration=1.968060234 podStartE2EDuration="4.395338805s" podCreationTimestamp="2026-02-02 11:33:50 +0000 UTC" firstStartedPulling="2026-02-02 11:33:51.332876343 +0000 UTC m=+3311.217069059" lastFinishedPulling="2026-02-02 11:33:53.760154914 +0000 UTC m=+3313.644347630" observedRunningTime="2026-02-02 11:33:54.388096546 +0000 UTC m=+3314.272289262" watchObservedRunningTime="2026-02-02 11:33:54.395338805 +0000 UTC m=+3314.279531521" Feb 02 11:34:00 crc kubenswrapper[4782]: I0202 11:34:00.453241 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:34:00 crc kubenswrapper[4782]: I0202 11:34:00.453858 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:34:00 crc kubenswrapper[4782]: I0202 11:34:00.504322 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:34:01 crc kubenswrapper[4782]: I0202 11:34:01.499022 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:34:01 crc kubenswrapper[4782]: I0202 11:34:01.560615 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ff29q"] Feb 02 11:34:01 crc kubenswrapper[4782]: I0202 11:34:01.822165 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:34:01 crc kubenswrapper[4782]: E0202 11:34:01.822518 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:34:03 crc kubenswrapper[4782]: I0202 11:34:03.460759 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ff29q" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="registry-server" containerID="cri-o://677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769" gracePeriod=2 Feb 02 11:34:03 crc kubenswrapper[4782]: I0202 11:34:03.932710 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.050618 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-utilities\") pod \"84d1feed-8d12-41b5-8606-4ac037256f14\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.050994 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxh7n\" (UniqueName: \"kubernetes.io/projected/84d1feed-8d12-41b5-8606-4ac037256f14-kube-api-access-bxh7n\") pod \"84d1feed-8d12-41b5-8606-4ac037256f14\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.051119 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-catalog-content\") pod \"84d1feed-8d12-41b5-8606-4ac037256f14\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.051360 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-utilities" (OuterVolumeSpecName: "utilities") pod "84d1feed-8d12-41b5-8606-4ac037256f14" (UID: "84d1feed-8d12-41b5-8606-4ac037256f14"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.052303 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.059902 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d1feed-8d12-41b5-8606-4ac037256f14-kube-api-access-bxh7n" (OuterVolumeSpecName: "kube-api-access-bxh7n") pod "84d1feed-8d12-41b5-8606-4ac037256f14" (UID: "84d1feed-8d12-41b5-8606-4ac037256f14"). InnerVolumeSpecName "kube-api-access-bxh7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.072389 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84d1feed-8d12-41b5-8606-4ac037256f14" (UID: "84d1feed-8d12-41b5-8606-4ac037256f14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.154015 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.154249 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxh7n\" (UniqueName: \"kubernetes.io/projected/84d1feed-8d12-41b5-8606-4ac037256f14-kube-api-access-bxh7n\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.470866 4782 generic.go:334] "Generic (PLEG): container finished" podID="84d1feed-8d12-41b5-8606-4ac037256f14" containerID="677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769" exitCode=0 Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.470906 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerDied","Data":"677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769"} Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.470932 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerDied","Data":"df451c3fc192c5378c3dad8d8c95604469154b7e53f7269d58d6c31fac3aa873"} Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.470960 4782 scope.go:117] "RemoveContainer" containerID="677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.470958 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.491770 4782 scope.go:117] "RemoveContainer" containerID="e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.508367 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ff29q"] Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.521422 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ff29q"] Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.532695 4782 scope.go:117] "RemoveContainer" containerID="7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.566738 4782 scope.go:117] "RemoveContainer" containerID="677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769" Feb 02 11:34:04 crc kubenswrapper[4782]: E0202 11:34:04.567236 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769\": container with ID starting with 677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769 not found: ID does not exist" containerID="677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.567266 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769"} err="failed to get container status \"677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769\": rpc error: code = NotFound desc = could not find container \"677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769\": container with ID starting with 677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769 not found: ID does not exist" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.567301 4782 scope.go:117] "RemoveContainer" containerID="e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d" Feb 02 11:34:04 crc kubenswrapper[4782]: E0202 11:34:04.567818 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d\": container with ID starting with e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d not found: ID does not exist" containerID="e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.567837 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d"} err="failed to get container status \"e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d\": rpc error: code = NotFound desc = could not find container \"e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d\": container with ID starting with e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d not found: ID does not exist" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.567849 4782 scope.go:117] "RemoveContainer" containerID="7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7" Feb 02 11:34:04 crc kubenswrapper[4782]: E0202 11:34:04.568035 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7\": container with ID starting with 7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7 not found: ID does not exist" containerID="7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.568055 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7"} err="failed to get container status \"7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7\": rpc error: code = NotFound desc = could not find container \"7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7\": container with ID starting with 7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7 not found: ID does not exist" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.832658 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" path="/var/lib/kubelet/pods/84d1feed-8d12-41b5-8606-4ac037256f14/volumes" Feb 02 11:34:13 crc kubenswrapper[4782]: I0202 11:34:13.822616 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:34:13 crc kubenswrapper[4782]: E0202 11:34:13.824022 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:34:25 crc kubenswrapper[4782]: I0202 11:34:25.832293 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:34:25 crc kubenswrapper[4782]: E0202 11:34:25.833423 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:34:39 crc kubenswrapper[4782]: I0202 11:34:39.821407 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:34:39 crc kubenswrapper[4782]: E0202 11:34:39.822160 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:34:51 crc kubenswrapper[4782]: I0202 11:34:51.821313 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:34:51 crc kubenswrapper[4782]: E0202 11:34:51.822095 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:35:04 crc kubenswrapper[4782]: I0202 11:35:04.822008 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:35:04 crc kubenswrapper[4782]: E0202 11:35:04.823879 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:35:19 crc kubenswrapper[4782]: I0202 11:35:19.821018 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:35:19 crc kubenswrapper[4782]: E0202 11:35:19.821848 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:35:32 crc kubenswrapper[4782]: I0202 11:35:32.821112 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:35:32 crc kubenswrapper[4782]: E0202 11:35:32.822151 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:35:35 crc kubenswrapper[4782]: I0202 11:35:35.849781 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qpwft"] Feb 02 11:35:35 crc kubenswrapper[4782]: E0202 11:35:35.850833 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="extract-content" Feb 02 11:35:35 crc kubenswrapper[4782]: I0202 11:35:35.850850 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="extract-content" Feb 02 11:35:35 crc kubenswrapper[4782]: E0202 11:35:35.850864 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="registry-server" Feb 02 11:35:35 crc kubenswrapper[4782]: I0202 11:35:35.850871 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="registry-server" Feb 02 11:35:35 crc kubenswrapper[4782]: E0202 11:35:35.850933 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="extract-utilities" Feb 02 11:35:35 crc 
Feb 02 11:35:35 crc kubenswrapper[4782]: I0202 11:35:35.851185 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="registry-server"
Feb 02 11:35:35 crc kubenswrapper[4782]: I0202 11:35:35.854278 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:35 crc kubenswrapper[4782]: I0202 11:35:35.860554 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpwft"]
Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.007147 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-catalog-content\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.007192 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsk5m\" (UniqueName: \"kubernetes.io/projected/cb841e7c-9074-4a9a-92e7-9e65398d733f-kube-api-access-rsk5m\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.007252 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-utilities\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.109951 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-catalog-content\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.110038 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsk5m\" (UniqueName: \"kubernetes.io/projected/cb841e7c-9074-4a9a-92e7-9e65398d733f-kube-api-access-rsk5m\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.110111 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-utilities\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.110743 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-catalog-content\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.110783 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-utilities\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.140722 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsk5m\" (UniqueName: \"kubernetes.io/projected/cb841e7c-9074-4a9a-92e7-9e65398d733f-kube-api-access-rsk5m\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.177011 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.813983 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpwft"]
Feb 02 11:35:37 crc kubenswrapper[4782]: I0202 11:35:37.329268 4782 generic.go:334] "Generic (PLEG): container finished" podID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerID="757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370" exitCode=0
Feb 02 11:35:37 crc kubenswrapper[4782]: I0202 11:35:37.329449 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerDied","Data":"757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370"}
Feb 02 11:35:37 crc kubenswrapper[4782]: I0202 11:35:37.329688 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerStarted","Data":"8908b2e444a0e08f2bac365aa9b5a6bf0976250e99f6e5024b4db88b333fe053"}
Feb 02 11:35:37 crc kubenswrapper[4782]: I0202 11:35:37.331761 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 11:35:39 crc kubenswrapper[4782]: I0202 11:35:39.349438 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerStarted","Data":"376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378"}
Feb 02 11:35:45 crc kubenswrapper[4782]: I0202 11:35:45.399003 4782 generic.go:334] "Generic (PLEG): container finished" podID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerID="376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378" exitCode=0
Feb 02 11:35:45 crc kubenswrapper[4782]: I0202 11:35:45.399105 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerDied","Data":"376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378"}
Feb 02 11:35:46 crc kubenswrapper[4782]: I0202 11:35:46.410401 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerStarted","Data":"43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd"}
Feb 02 11:35:46 crc kubenswrapper[4782]: I0202 11:35:46.444949 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qpwft" podStartSLOduration=2.777708008 podStartE2EDuration="11.444924326s" podCreationTimestamp="2026-02-02 11:35:35 +0000 UTC" firstStartedPulling="2026-02-02 11:35:37.331443676 +0000 UTC m=+3417.215636392" lastFinishedPulling="2026-02-02 11:35:45.998659994 +0000 UTC m=+3425.882852710" observedRunningTime="2026-02-02 11:35:46.434062954 +0000 UTC m=+3426.318255680" watchObservedRunningTime="2026-02-02 11:35:46.444924326 +0000 UTC m=+3426.329117042"
Feb 02 11:35:46 crc kubenswrapper[4782]: I0202 11:35:46.820831 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2"
Feb 02 11:35:46 crc kubenswrapper[4782]: E0202 11:35:46.821301 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:35:56 crc kubenswrapper[4782]: I0202 11:35:56.177849 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:56 crc kubenswrapper[4782]: I0202 11:35:56.178473 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:35:57 crc kubenswrapper[4782]: I0202 11:35:57.233363 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpwft" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" probeResult="failure" output=<
Feb 02 11:35:57 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s
Feb 02 11:35:57 crc kubenswrapper[4782]: >
Feb 02 11:35:59 crc kubenswrapper[4782]: I0202 11:35:59.822345 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2"
Feb 02 11:36:00 crc kubenswrapper[4782]: I0202 11:36:00.528875 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"e4146ee7483fb1b799415c8adb5be7703998921bbdca2692a753a4e0f1072257"}
Feb 02 11:36:07 crc kubenswrapper[4782]: I0202 11:36:07.222830 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpwft" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" probeResult="failure" output=<
Feb 02 11:36:07 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s
Feb 02 11:36:07 crc kubenswrapper[4782]: >
Feb 02 11:36:17 crc kubenswrapper[4782]: I0202 11:36:17.225391 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpwft" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" probeResult="failure" output=<
Feb 02 11:36:17 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s
Feb 02 11:36:17 crc kubenswrapper[4782]: >
Feb 02 11:36:26 crc kubenswrapper[4782]: I0202 11:36:26.223759 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:36:26 crc kubenswrapper[4782]: I0202 11:36:26.282320 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:36:26 crc kubenswrapper[4782]: I0202 11:36:26.458737 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpwft"]
Feb 02 11:36:27 crc kubenswrapper[4782]: I0202 11:36:27.762424 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qpwft" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" containerID="cri-o://43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd" gracePeriod=2
Feb 02 11:36:28 crc kubenswrapper[4782]: I0202 11:36:28.294757 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpwft"
Feb 02 11:36:28 crc kubenswrapper[4782]: I0202 11:36:28.494977 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-catalog-content\") pod \"cb841e7c-9074-4a9a-92e7-9e65398d733f\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") "
Feb 02 11:36:28 crc kubenswrapper[4782]: I0202 11:36:28.495186 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsk5m\" (UniqueName: \"kubernetes.io/projected/cb841e7c-9074-4a9a-92e7-9e65398d733f-kube-api-access-rsk5m\") pod \"cb841e7c-9074-4a9a-92e7-9e65398d733f\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") "
Feb 02 11:36:28 crc kubenswrapper[4782]: I0202 11:36:28.495241 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-utilities\") pod \"cb841e7c-9074-4a9a-92e7-9e65398d733f\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") "
Feb 02 11:36:28 crc kubenswrapper[4782]: I0202 11:36:28.495952 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-utilities" (OuterVolumeSpecName: "utilities") pod "cb841e7c-9074-4a9a-92e7-9e65398d733f" (UID: "cb841e7c-9074-4a9a-92e7-9e65398d733f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:36:28 crc kubenswrapper[4782]: I0202 11:36:28.597276 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.065380 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb841e7c-9074-4a9a-92e7-9e65398d733f-kube-api-access-rsk5m" (OuterVolumeSpecName: "kube-api-access-rsk5m") pod "cb841e7c-9074-4a9a-92e7-9e65398d733f" (UID: "cb841e7c-9074-4a9a-92e7-9e65398d733f"). InnerVolumeSpecName "kube-api-access-rsk5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.082350 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsk5m\" (UniqueName: \"kubernetes.io/projected/cb841e7c-9074-4a9a-92e7-9e65398d733f-kube-api-access-rsk5m\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.103467 4782 generic.go:334] "Generic (PLEG): container finished" podID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerID="43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd" exitCode=0 Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.103852 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.142055 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerDied","Data":"43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd"} Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.142122 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerDied","Data":"8908b2e444a0e08f2bac365aa9b5a6bf0976250e99f6e5024b4db88b333fe053"} Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.142167 4782 scope.go:117] "RemoveContainer" containerID="43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.181392 4782 scope.go:117] "RemoveContainer" containerID="376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.235479 4782 scope.go:117] "RemoveContainer" containerID="757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.256249 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb841e7c-9074-4a9a-92e7-9e65398d733f" (UID: "cb841e7c-9074-4a9a-92e7-9e65398d733f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.272930 4782 scope.go:117] "RemoveContainer" containerID="43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd" Feb 02 11:36:29 crc kubenswrapper[4782]: E0202 11:36:29.273356 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd\": container with ID starting with 43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd not found: ID does not exist" containerID="43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.273490 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd"} err="failed to get container status \"43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd\": rpc error: code = NotFound desc = could not find container \"43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd\": container with ID starting with 43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd not found: ID does not exist" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.273581 4782 scope.go:117] "RemoveContainer" containerID="376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378" Feb 02 11:36:29 crc kubenswrapper[4782]: E0202 11:36:29.274264 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378\": container with ID starting with 376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378 not found: ID does not exist" containerID="376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.274306 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378"} err="failed to get container status \"376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378\": rpc error: code = NotFound desc = could not find container \"376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378\": container with ID starting with 376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378 not found: ID does not exist" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.274334 4782 scope.go:117] "RemoveContainer" containerID="757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370" Feb 02 11:36:29 crc kubenswrapper[4782]: E0202 11:36:29.274555 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370\": container with ID starting with 757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370 not found: ID does not exist" containerID="757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.274583 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370"} err="failed to get container status \"757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370\": rpc error: code = NotFound desc = could not 
find container \"757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370\": container with ID starting with 757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370 not found: ID does not exist" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.295896 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.439590 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpwft"] Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.449607 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qpwft"] Feb 02 11:36:30 crc kubenswrapper[4782]: I0202 11:36:30.832255 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" path="/var/lib/kubelet/pods/cb841e7c-9074-4a9a-92e7-9e65398d733f/volumes" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.618221 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-86gp4"] Feb 02 11:37:43 crc kubenswrapper[4782]: E0202 11:37:43.619321 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="extract-content" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.619338 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="extract-content" Feb 02 11:37:43 crc kubenswrapper[4782]: E0202 11:37:43.619377 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.619386 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" Feb 02 11:37:43 crc kubenswrapper[4782]: E0202 11:37:43.619401 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="extract-utilities" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.619409 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="extract-utilities" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.619643 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.621371 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.646552 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86gp4"] Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.720840 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-utilities\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.720971 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-catalog-content\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.721033 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpnzv\" (UniqueName: \"kubernetes.io/projected/22bd520b-44a1-48ea-8b4e-dc5aff206551-kube-api-access-xpnzv\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.822497 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpnzv\" (UniqueName: \"kubernetes.io/projected/22bd520b-44a1-48ea-8b4e-dc5aff206551-kube-api-access-xpnzv\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.822580 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-utilities\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.822680 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-catalog-content\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.823168 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-catalog-content\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.823635 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-utilities\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.847889 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xpnzv\" (UniqueName: \"kubernetes.io/projected/22bd520b-44a1-48ea-8b4e-dc5aff206551-kube-api-access-xpnzv\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.944289 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:44 crc kubenswrapper[4782]: I0202 11:37:44.533749 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86gp4"] Feb 02 11:37:44 crc kubenswrapper[4782]: I0202 11:37:44.765872 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerStarted","Data":"008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393"} Feb 02 11:37:44 crc kubenswrapper[4782]: I0202 11:37:44.766285 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerStarted","Data":"54a25b0eb13a9b35776a71dc8e3b2edb4139cd89b91639b3e51b9a49d169da6d"} Feb 02 11:37:45 crc kubenswrapper[4782]: I0202 11:37:45.777384 4782 generic.go:334] "Generic (PLEG): container finished" podID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerID="008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393" exitCode=0 Feb 02 11:37:45 crc kubenswrapper[4782]: I0202 11:37:45.777428 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerDied","Data":"008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393"} Feb 02 11:37:46 crc kubenswrapper[4782]: I0202 11:37:46.790453 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerStarted","Data":"c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1"} Feb 02 11:37:48 crc kubenswrapper[4782]: I0202 11:37:48.814741 4782 generic.go:334] "Generic (PLEG): container finished" podID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerID="c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1" exitCode=0 Feb 02 11:37:48 crc kubenswrapper[4782]: I0202 11:37:48.814814 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerDied","Data":"c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1"} Feb 02 11:37:49 crc kubenswrapper[4782]: I0202 11:37:49.840712 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerStarted","Data":"c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd"} Feb 02 11:37:49 crc kubenswrapper[4782]: I0202 11:37:49.869197 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-86gp4" podStartSLOduration=3.430284627 podStartE2EDuration="6.869173441s" podCreationTimestamp="2026-02-02 11:37:43 +0000 UTC" firstStartedPulling="2026-02-02 11:37:45.78089126 +0000 UTC m=+3545.665083976" lastFinishedPulling="2026-02-02 
11:37:49.219780074 +0000 UTC m=+3549.103972790" observedRunningTime="2026-02-02 11:37:49.864023433 +0000 UTC m=+3549.748216169" watchObservedRunningTime="2026-02-02 11:37:49.869173441 +0000 UTC m=+3549.753366157" Feb 02 11:37:53 crc kubenswrapper[4782]: I0202 11:37:53.944713 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:53 crc kubenswrapper[4782]: I0202 11:37:53.945338 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:54 crc kubenswrapper[4782]: I0202 11:37:54.002767 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:54 crc kubenswrapper[4782]: I0202 11:37:54.940183 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:54 crc kubenswrapper[4782]: I0202 11:37:54.991450 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86gp4"] Feb 02 11:37:56 crc kubenswrapper[4782]: I0202 11:37:56.900151 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-86gp4" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="registry-server" containerID="cri-o://c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd" gracePeriod=2 Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.466821 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.515859 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-utilities\") pod \"22bd520b-44a1-48ea-8b4e-dc5aff206551\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.515990 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-catalog-content\") pod \"22bd520b-44a1-48ea-8b4e-dc5aff206551\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.516158 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpnzv\" (UniqueName: \"kubernetes.io/projected/22bd520b-44a1-48ea-8b4e-dc5aff206551-kube-api-access-xpnzv\") pod \"22bd520b-44a1-48ea-8b4e-dc5aff206551\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.518858 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-utilities" (OuterVolumeSpecName: "utilities") pod "22bd520b-44a1-48ea-8b4e-dc5aff206551" (UID: "22bd520b-44a1-48ea-8b4e-dc5aff206551"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.550916 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22bd520b-44a1-48ea-8b4e-dc5aff206551-kube-api-access-xpnzv" (OuterVolumeSpecName: "kube-api-access-xpnzv") pod "22bd520b-44a1-48ea-8b4e-dc5aff206551" (UID: "22bd520b-44a1-48ea-8b4e-dc5aff206551"). InnerVolumeSpecName "kube-api-access-xpnzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.591500 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22bd520b-44a1-48ea-8b4e-dc5aff206551" (UID: "22bd520b-44a1-48ea-8b4e-dc5aff206551"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.618568 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.618606 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.618617 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpnzv\" (UniqueName: \"kubernetes.io/projected/22bd520b-44a1-48ea-8b4e-dc5aff206551-kube-api-access-xpnzv\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.910130 4782 generic.go:334] "Generic (PLEG): container finished" podID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerID="c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd" exitCode=0 Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.910176 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerDied","Data":"c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd"} Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.910215 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerDied","Data":"54a25b0eb13a9b35776a71dc8e3b2edb4139cd89b91639b3e51b9a49d169da6d"} Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.910186 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.910243 4782 scope.go:117] "RemoveContainer" containerID="c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.936914 4782 scope.go:117] "RemoveContainer" containerID="c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.971980 4782 scope.go:117] "RemoveContainer" containerID="008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.975051 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86gp4"] Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.986591 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-86gp4"] Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.021032 4782 scope.go:117] "RemoveContainer" containerID="c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd" Feb 02 11:37:58 crc kubenswrapper[4782]: E0202 11:37:58.024060 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd\": container with ID starting with c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd not found: ID does not exist" containerID="c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd" Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.024213 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd"} err="failed to get container status \"c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd\": rpc error: code = NotFound desc = could not find container \"c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd\": container with ID starting with c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd not found: ID does not exist" Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.024377 4782 scope.go:117] "RemoveContainer" containerID="c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1" Feb 02 11:37:58 crc kubenswrapper[4782]: E0202 11:37:58.024996 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1\": container with ID starting with c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1 not found: ID does not exist" containerID="c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1" Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.025034 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1"} err="failed to get container status \"c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1\": rpc error: code = NotFound desc = could not find container \"c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1\": container with ID starting with c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1 not found: ID does not exist" Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.025063 4782 scope.go:117] "RemoveContainer" 
containerID="008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393" Feb 02 11:37:58 crc kubenswrapper[4782]: E0202 11:37:58.025540 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393\": container with ID starting with 008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393 not found: ID does not exist" containerID="008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393" Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.025619 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393"} err="failed to get container status \"008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393\": rpc error: code = NotFound desc = could not find container \"008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393\": container with ID starting with 008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393 not found: ID does not exist" Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.844321 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" path="/var/lib/kubelet/pods/22bd520b-44a1-48ea-8b4e-dc5aff206551/volumes" Feb 02 11:38:22 crc kubenswrapper[4782]: I0202 11:38:22.951590 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:38:22 crc kubenswrapper[4782]: I0202 11:38:22.952317 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:38:52 crc kubenswrapper[4782]: I0202 11:38:52.951567 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:38:52 crc kubenswrapper[4782]: I0202 11:38:52.952848 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:39:22 crc kubenswrapper[4782]: I0202 11:39:22.951295 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:39:22 crc kubenswrapper[4782]: I0202 11:39:22.951983 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:39:22 crc kubenswrapper[4782]: I0202 11:39:22.952040 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:39:22 crc kubenswrapper[4782]: I0202 11:39:22.952906 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4146ee7483fb1b799415c8adb5be7703998921bbdca2692a753a4e0f1072257"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:39:22 crc kubenswrapper[4782]: I0202 11:39:22.952965 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://e4146ee7483fb1b799415c8adb5be7703998921bbdca2692a753a4e0f1072257" gracePeriod=600 Feb 02 11:39:23 crc kubenswrapper[4782]: I0202 11:39:23.724250 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="e4146ee7483fb1b799415c8adb5be7703998921bbdca2692a753a4e0f1072257" exitCode=0 Feb 02 11:39:23 crc kubenswrapper[4782]: I0202 11:39:23.724806 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"e4146ee7483fb1b799415c8adb5be7703998921bbdca2692a753a4e0f1072257"} Feb 02 11:39:23 crc kubenswrapper[4782]: I0202 11:39:23.724840 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6"} Feb 02 11:39:23 crc kubenswrapper[4782]: I0202 11:39:23.724858 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:39:50 crc kubenswrapper[4782]: I0202 11:39:50.049693 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-88lt6"] Feb 02 11:39:50 crc kubenswrapper[4782]: I0202 11:39:50.061480 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-61e9-account-create-update-vjlvv"] Feb 02 11:39:50 crc kubenswrapper[4782]: I0202 11:39:50.070831 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-61e9-account-create-update-vjlvv"] Feb 02 11:39:50 crc kubenswrapper[4782]: I0202 11:39:50.079071 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-88lt6"] Feb 02 11:39:50 crc kubenswrapper[4782]: I0202 11:39:50.833369 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7260512c-a397-4b18-ab4d-a97e7dbf50d9" path="/var/lib/kubelet/pods/7260512c-a397-4b18-ab4d-a97e7dbf50d9/volumes" Feb 02 11:39:50 crc kubenswrapper[4782]: I0202 11:39:50.835409 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9a2fa32-7949-4dbe-8e51-49627e08f051" path="/var/lib/kubelet/pods/d9a2fa32-7949-4dbe-8e51-49627e08f051/volumes" Feb 02 11:39:55 crc kubenswrapper[4782]: I0202 11:39:55.098800 4782 scope.go:117] "RemoveContainer" 
containerID="18235f2d52d1acb53abcc5d69239ea08135f49af22250cec7c915e6b6af27b05" Feb 02 11:39:55 crc kubenswrapper[4782]: I0202 11:39:55.155899 4782 scope.go:117] "RemoveContainer" containerID="22ee95619a6ae6669166c5388f7644833a8f50918632409a8660abf992c5c5da" Feb 02 11:40:44 crc kubenswrapper[4782]: I0202 11:40:44.040231 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-p6nkb"] Feb 02 11:40:44 crc kubenswrapper[4782]: I0202 11:40:44.051350 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-p6nkb"] Feb 02 11:40:44 crc kubenswrapper[4782]: I0202 11:40:44.833447 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" path="/var/lib/kubelet/pods/f45fc51f-4efe-4cbf-9539-d858ac3c2e73/volumes" Feb 02 11:40:55 crc kubenswrapper[4782]: I0202 11:40:55.265309 4782 scope.go:117] "RemoveContainer" containerID="c033e06590ed48930855476f355d38330fd5900d1d6d3cdf6a14188571b721f2" Feb 02 11:41:52 crc kubenswrapper[4782]: I0202 11:41:52.950950 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:41:52 crc kubenswrapper[4782]: I0202 11:41:52.951386 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:42:22 crc kubenswrapper[4782]: I0202 11:42:22.951126 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:42:22 crc kubenswrapper[4782]: I0202 11:42:22.951692 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:42:52 crc kubenswrapper[4782]: I0202 11:42:52.951972 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:42:52 crc kubenswrapper[4782]: I0202 11:42:52.953022 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:42:52 crc kubenswrapper[4782]: I0202 11:42:52.953410 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:42:52 crc kubenswrapper[4782]: I0202 11:42:52.954800 4782 kuberuntime_manager.go:1027] "Message 
for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:42:52 crc kubenswrapper[4782]: I0202 11:42:52.954876 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" gracePeriod=600 Feb 02 11:42:53 crc kubenswrapper[4782]: E0202 11:42:53.096002 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:42:53 crc kubenswrapper[4782]: I0202 11:42:53.520350 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" exitCode=0 Feb 02 11:42:53 crc kubenswrapper[4782]: I0202 11:42:53.520419 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6"} Feb 02 11:42:53 crc kubenswrapper[4782]: I0202 11:42:53.520991 4782 scope.go:117] "RemoveContainer" containerID="e4146ee7483fb1b799415c8adb5be7703998921bbdca2692a753a4e0f1072257" Feb 02 11:42:53 crc kubenswrapper[4782]: I0202 11:42:53.521681 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:42:53 crc kubenswrapper[4782]: E0202 11:42:53.521983 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:43:06 crc kubenswrapper[4782]: I0202 11:43:06.821978 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:43:06 crc kubenswrapper[4782]: E0202 11:43:06.822803 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:43:19 crc kubenswrapper[4782]: I0202 11:43:19.821380 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:43:19 crc kubenswrapper[4782]: E0202 
11:43:19.823048 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:43:31 crc kubenswrapper[4782]: I0202 11:43:31.821277 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:43:31 crc kubenswrapper[4782]: E0202 11:43:31.821942 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:43:44 crc kubenswrapper[4782]: I0202 11:43:44.821663 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:43:44 crc kubenswrapper[4782]: E0202 11:43:44.822520 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:43:56 crc kubenswrapper[4782]: I0202 11:43:56.820922 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:43:56 crc kubenswrapper[4782]: E0202 11:43:56.821786 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:44:08 crc kubenswrapper[4782]: I0202 11:44:08.827979 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:44:08 crc kubenswrapper[4782]: E0202 11:44:08.828822 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:44:21 crc kubenswrapper[4782]: I0202 11:44:21.822481 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:44:21 crc kubenswrapper[4782]: E0202 11:44:21.823570 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:44:34 crc kubenswrapper[4782]: I0202 11:44:34.821553 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:44:34 crc kubenswrapper[4782]: E0202 11:44:34.822499 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:44:36 crc kubenswrapper[4782]: I0202 11:44:36.998280 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xj6cn"] Feb 02 11:44:36 crc kubenswrapper[4782]: E0202 11:44:36.999051 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="extract-utilities" Feb 02 11:44:36 crc kubenswrapper[4782]: I0202 11:44:36.999064 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="extract-utilities" Feb 02 11:44:36 crc kubenswrapper[4782]: E0202 11:44:36.999090 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="extract-content" Feb 02 11:44:36 crc kubenswrapper[4782]: I0202 11:44:36.999096 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="extract-content" Feb 02 11:44:36 crc kubenswrapper[4782]: E0202 11:44:36.999103 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="registry-server" Feb 02 11:44:36 crc kubenswrapper[4782]: I0202 11:44:36.999109 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="registry-server" Feb 02 11:44:36 crc kubenswrapper[4782]: I0202 11:44:36.999287 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="registry-server" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.000773 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.018963 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj6cn"] Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.114347 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv7bg\" (UniqueName: \"kubernetes.io/projected/6102134b-c682-49ac-abbb-1303c639d46b-kube-api-access-nv7bg\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.114404 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-catalog-content\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.114487 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-utilities\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.216684 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv7bg\" (UniqueName: \"kubernetes.io/projected/6102134b-c682-49ac-abbb-1303c639d46b-kube-api-access-nv7bg\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.216724 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-catalog-content\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.216762 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-utilities\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.217262 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-catalog-content\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.217350 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-utilities\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.239087 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nv7bg\" (UniqueName: \"kubernetes.io/projected/6102134b-c682-49ac-abbb-1303c639d46b-kube-api-access-nv7bg\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.319489 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.921959 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj6cn"] Feb 02 11:44:38 crc kubenswrapper[4782]: I0202 11:44:38.423165 4782 generic.go:334] "Generic (PLEG): container finished" podID="6102134b-c682-49ac-abbb-1303c639d46b" containerID="0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70" exitCode=0 Feb 02 11:44:38 crc kubenswrapper[4782]: I0202 11:44:38.423261 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerDied","Data":"0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70"} Feb 02 11:44:38 crc kubenswrapper[4782]: I0202 11:44:38.423450 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerStarted","Data":"ab539b6f75c611534c7284d3bea63be36fdfd3b06fe7625af9cda86dd231d758"} Feb 02 11:44:38 crc kubenswrapper[4782]: I0202 11:44:38.425423 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:44:40 crc kubenswrapper[4782]: I0202 11:44:40.440111 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerStarted","Data":"61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a"} Feb 02 11:44:41 crc kubenswrapper[4782]: I0202 11:44:41.461126 4782 generic.go:334] "Generic (PLEG): container finished" podID="6102134b-c682-49ac-abbb-1303c639d46b" containerID="61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a" exitCode=0 Feb 02 11:44:41 crc kubenswrapper[4782]: I0202 11:44:41.461445 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerDied","Data":"61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a"} Feb 02 11:44:42 crc kubenswrapper[4782]: I0202 11:44:42.471684 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerStarted","Data":"4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf"} Feb 02 11:44:42 crc kubenswrapper[4782]: I0202 11:44:42.496102 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xj6cn" podStartSLOduration=3.0859154 podStartE2EDuration="6.496081951s" podCreationTimestamp="2026-02-02 11:44:36 +0000 UTC" firstStartedPulling="2026-02-02 11:44:38.425136591 +0000 UTC m=+3958.309329297" lastFinishedPulling="2026-02-02 11:44:41.835303132 +0000 UTC m=+3961.719495848" observedRunningTime="2026-02-02 11:44:42.492252051 +0000 UTC m=+3962.376444787" watchObservedRunningTime="2026-02-02 11:44:42.496081951 +0000 UTC 
m=+3962.380274667" Feb 02 11:44:47 crc kubenswrapper[4782]: I0202 11:44:47.320797 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:47 crc kubenswrapper[4782]: I0202 11:44:47.321437 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:47 crc kubenswrapper[4782]: I0202 11:44:47.381889 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:47 crc kubenswrapper[4782]: I0202 11:44:47.565405 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:47 crc kubenswrapper[4782]: I0202 11:44:47.630191 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj6cn"] Feb 02 11:44:47 crc kubenswrapper[4782]: I0202 11:44:47.821977 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:44:47 crc kubenswrapper[4782]: E0202 11:44:47.822229 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:44:49 crc kubenswrapper[4782]: I0202 11:44:49.528329 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xj6cn" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="registry-server" containerID="cri-o://4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf" gracePeriod=2 Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.239863 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.351013 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv7bg\" (UniqueName: \"kubernetes.io/projected/6102134b-c682-49ac-abbb-1303c639d46b-kube-api-access-nv7bg\") pod \"6102134b-c682-49ac-abbb-1303c639d46b\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.351101 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-utilities\") pod \"6102134b-c682-49ac-abbb-1303c639d46b\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.351172 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-catalog-content\") pod \"6102134b-c682-49ac-abbb-1303c639d46b\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.352174 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-utilities" (OuterVolumeSpecName: "utilities") pod "6102134b-c682-49ac-abbb-1303c639d46b" (UID: "6102134b-c682-49ac-abbb-1303c639d46b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.356880 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6102134b-c682-49ac-abbb-1303c639d46b-kube-api-access-nv7bg" (OuterVolumeSpecName: "kube-api-access-nv7bg") pod "6102134b-c682-49ac-abbb-1303c639d46b" (UID: "6102134b-c682-49ac-abbb-1303c639d46b"). InnerVolumeSpecName "kube-api-access-nv7bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.454109 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv7bg\" (UniqueName: \"kubernetes.io/projected/6102134b-c682-49ac-abbb-1303c639d46b-kube-api-access-nv7bg\") on node \"crc\" DevicePath \"\"" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.454152 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.540293 4782 generic.go:334] "Generic (PLEG): container finished" podID="6102134b-c682-49ac-abbb-1303c639d46b" containerID="4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf" exitCode=0 Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.540356 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.540371 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerDied","Data":"4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf"} Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.541884 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerDied","Data":"ab539b6f75c611534c7284d3bea63be36fdfd3b06fe7625af9cda86dd231d758"} Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.541916 4782 scope.go:117] "RemoveContainer" containerID="4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.567795 4782 scope.go:117] "RemoveContainer" containerID="61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.587586 4782 scope.go:117] "RemoveContainer" containerID="0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.634027 4782 scope.go:117] "RemoveContainer" containerID="4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf" Feb 02 11:44:50 crc kubenswrapper[4782]: E0202 11:44:50.635169 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf\": container with ID starting with 4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf not found: ID does not exist" containerID="4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.635273 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf"} err="failed to get container status \"4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf\": rpc error: code = NotFound desc = could not find container \"4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf\": container with ID starting with 4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf not found: ID does not exist" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.635353 4782 scope.go:117] "RemoveContainer" containerID="61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a" Feb 02 11:44:50 crc kubenswrapper[4782]: E0202 11:44:50.635798 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a\": container with ID starting with 61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a not found: ID does not exist" containerID="61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.635874 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a"} err="failed to get container status \"61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a\": rpc error: code = NotFound desc = could not find container 
\"61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a\": container with ID starting with 61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a not found: ID does not exist" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.635936 4782 scope.go:117] "RemoveContainer" containerID="0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70" Feb 02 11:44:50 crc kubenswrapper[4782]: E0202 11:44:50.636406 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70\": container with ID starting with 0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70 not found: ID does not exist" containerID="0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.636493 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70"} err="failed to get container status \"0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70\": rpc error: code = NotFound desc = could not find container \"0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70\": container with ID starting with 0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70 not found: ID does not exist" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.683236 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6102134b-c682-49ac-abbb-1303c639d46b" (UID: "6102134b-c682-49ac-abbb-1303c639d46b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.760939 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.865298 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj6cn"] Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.874724 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj6cn"] Feb 02 11:44:52 crc kubenswrapper[4782]: I0202 11:44:52.829949 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6102134b-c682-49ac-abbb-1303c639d46b" path="/var/lib/kubelet/pods/6102134b-c682-49ac-abbb-1303c639d46b/volumes" Feb 02 11:44:58 crc kubenswrapper[4782]: I0202 11:44:58.822209 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:44:58 crc kubenswrapper[4782]: E0202 11:44:58.823325 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.206608 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh"] Feb 02 11:45:00 crc kubenswrapper[4782]: E0202 11:45:00.207472 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="extract-content" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.207489 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="extract-content" Feb 02 11:45:00 crc kubenswrapper[4782]: E0202 11:45:00.207507 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.207514 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4782]: E0202 11:45:00.207534 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="extract-utilities" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.207542 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="extract-utilities" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.207798 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.208656 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.217583 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh"] Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.219217 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.220565 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.257499 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/469bd464-b4f1-401d-be7b-da5ac0b089d2-secret-volume\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.257564 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/469bd464-b4f1-401d-be7b-da5ac0b089d2-config-volume\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.257629 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jll4n\" (UniqueName: \"kubernetes.io/projected/469bd464-b4f1-401d-be7b-da5ac0b089d2-kube-api-access-jll4n\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.359757 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/469bd464-b4f1-401d-be7b-da5ac0b089d2-secret-volume\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.360113 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/469bd464-b4f1-401d-be7b-da5ac0b089d2-config-volume\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.361357 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jll4n\" (UniqueName: \"kubernetes.io/projected/469bd464-b4f1-401d-be7b-da5ac0b089d2-kube-api-access-jll4n\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.361237 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/469bd464-b4f1-401d-be7b-da5ac0b089d2-config-volume\") pod 
\"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.367353 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/469bd464-b4f1-401d-be7b-da5ac0b089d2-secret-volume\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.379332 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jll4n\" (UniqueName: \"kubernetes.io/projected/469bd464-b4f1-401d-be7b-da5ac0b089d2-kube-api-access-jll4n\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.545761 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:01 crc kubenswrapper[4782]: I0202 11:45:01.049873 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh"] Feb 02 11:45:01 crc kubenswrapper[4782]: I0202 11:45:01.635378 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" event={"ID":"469bd464-b4f1-401d-be7b-da5ac0b089d2","Type":"ContainerStarted","Data":"07b7c95a2fa05599c5b60c9b1ce739b2f37878424c049f33791b40d9ef8205ed"} Feb 02 11:45:01 crc kubenswrapper[4782]: I0202 11:45:01.635435 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" event={"ID":"469bd464-b4f1-401d-be7b-da5ac0b089d2","Type":"ContainerStarted","Data":"49c5d4d36480270abd39a76802588a8854bb59f652df61c4eb71175652142494"} Feb 02 11:45:01 crc kubenswrapper[4782]: I0202 11:45:01.664558 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" podStartSLOduration=1.664531824 podStartE2EDuration="1.664531824s" podCreationTimestamp="2026-02-02 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:45:01.655427832 +0000 UTC m=+3981.539620548" watchObservedRunningTime="2026-02-02 11:45:01.664531824 +0000 UTC m=+3981.548724540" Feb 02 11:45:02 crc kubenswrapper[4782]: I0202 11:45:02.646167 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" event={"ID":"469bd464-b4f1-401d-be7b-da5ac0b089d2","Type":"ContainerDied","Data":"07b7c95a2fa05599c5b60c9b1ce739b2f37878424c049f33791b40d9ef8205ed"} Feb 02 11:45:02 crc kubenswrapper[4782]: I0202 11:45:02.646035 4782 generic.go:334] "Generic (PLEG): container finished" podID="469bd464-b4f1-401d-be7b-da5ac0b089d2" containerID="07b7c95a2fa05599c5b60c9b1ce739b2f37878424c049f33791b40d9ef8205ed" exitCode=0 Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.304674 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.349854 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/469bd464-b4f1-401d-be7b-da5ac0b089d2-secret-volume\") pod \"469bd464-b4f1-401d-be7b-da5ac0b089d2\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.349938 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/469bd464-b4f1-401d-be7b-da5ac0b089d2-config-volume\") pod \"469bd464-b4f1-401d-be7b-da5ac0b089d2\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.350042 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jll4n\" (UniqueName: \"kubernetes.io/projected/469bd464-b4f1-401d-be7b-da5ac0b089d2-kube-api-access-jll4n\") pod \"469bd464-b4f1-401d-be7b-da5ac0b089d2\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.351322 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/469bd464-b4f1-401d-be7b-da5ac0b089d2-config-volume" (OuterVolumeSpecName: "config-volume") pod "469bd464-b4f1-401d-be7b-da5ac0b089d2" (UID: "469bd464-b4f1-401d-be7b-da5ac0b089d2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.358945 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469bd464-b4f1-401d-be7b-da5ac0b089d2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "469bd464-b4f1-401d-be7b-da5ac0b089d2" (UID: "469bd464-b4f1-401d-be7b-da5ac0b089d2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.359272 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/469bd464-b4f1-401d-be7b-da5ac0b089d2-kube-api-access-jll4n" (OuterVolumeSpecName: "kube-api-access-jll4n") pod "469bd464-b4f1-401d-be7b-da5ac0b089d2" (UID: "469bd464-b4f1-401d-be7b-da5ac0b089d2"). InnerVolumeSpecName "kube-api-access-jll4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.451955 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/469bd464-b4f1-401d-be7b-da5ac0b089d2-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.451998 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/469bd464-b4f1-401d-be7b-da5ac0b089d2-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.452009 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jll4n\" (UniqueName: \"kubernetes.io/projected/469bd464-b4f1-401d-be7b-da5ac0b089d2-kube-api-access-jll4n\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.664613 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" event={"ID":"469bd464-b4f1-401d-be7b-da5ac0b089d2","Type":"ContainerDied","Data":"49c5d4d36480270abd39a76802588a8854bb59f652df61c4eb71175652142494"} Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.665021 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49c5d4d36480270abd39a76802588a8854bb59f652df61c4eb71175652142494" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.664684 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.741269 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"] Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.749846 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"] Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.835537 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ac376d-42fd-424f-a1bf-281bd9c9d31f" path="/var/lib/kubelet/pods/62ac376d-42fd-424f-a1bf-281bd9c9d31f/volumes" Feb 02 11:45:11 crc kubenswrapper[4782]: I0202 11:45:11.821262 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:45:11 crc kubenswrapper[4782]: E0202 11:45:11.822224 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:45:23 crc kubenswrapper[4782]: I0202 11:45:23.821135 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:45:23 crc kubenswrapper[4782]: E0202 11:45:23.821827 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:45:27 crc kubenswrapper[4782]: I0202 11:45:27.869133 4782 generic.go:334] "Generic (PLEG): container finished" podID="a5a266a5-ac00-49e1-9443-def4cebe65ad" containerID="762b4b5b8e241a2a3f60ed6176e6adc0554b048edbcf9782fa78026e47e66f14" exitCode=0 Feb 02 11:45:27 crc kubenswrapper[4782]: I0202 11:45:27.869215 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a5a266a5-ac00-49e1-9443-def4cebe65ad","Type":"ContainerDied","Data":"762b4b5b8e241a2a3f60ed6176e6adc0554b048edbcf9782fa78026e47e66f14"} Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.325470 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.357721 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-temporary\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.357803 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-config-data\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.357859 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.357917 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.358022 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config-secret\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.358065 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ca-certs\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.358102 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8gp9\" (UniqueName: \"kubernetes.io/projected/a5a266a5-ac00-49e1-9443-def4cebe65ad-kube-api-access-r8gp9\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.358156 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" 
(UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-workdir\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.358183 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ssh-key\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.370460 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-config-data" (OuterVolumeSpecName: "config-data") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.371040 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.380376 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.384176 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.394535 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5a266a5-ac00-49e1-9443-def4cebe65ad-kube-api-access-r8gp9" (OuterVolumeSpecName: "kube-api-access-r8gp9") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "kube-api-access-r8gp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.400299 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.406951 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.409253 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.438435 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.460854 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.460905 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463782 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463820 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463835 4782 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463847 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8gp9\" (UniqueName: \"kubernetes.io/projected/a5a266a5-ac00-49e1-9443-def4cebe65ad-kube-api-access-r8gp9\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463863 4782 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463874 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 02 
11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463887 4782 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.483960 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.565111 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.886405 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a5a266a5-ac00-49e1-9443-def4cebe65ad","Type":"ContainerDied","Data":"165e18b6fa145b8da7ea6159f1baabb2c9b6bfa2ecbd382cecfe714965ca36c1"} Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.886442 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="165e18b6fa145b8da7ea6159f1baabb2c9b6bfa2ecbd382cecfe714965ca36c1" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.886471 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 11:45:35 crc kubenswrapper[4782]: I0202 11:45:35.822329 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:45:35 crc kubenswrapper[4782]: E0202 11:45:35.823109 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.132872 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 11:45:38 crc kubenswrapper[4782]: E0202 11:45:38.133673 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469bd464-b4f1-401d-be7b-da5ac0b089d2" containerName="collect-profiles" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.133691 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="469bd464-b4f1-401d-be7b-da5ac0b089d2" containerName="collect-profiles" Feb 02 11:45:38 crc kubenswrapper[4782]: E0202 11:45:38.133712 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a266a5-ac00-49e1-9443-def4cebe65ad" containerName="tempest-tests-tempest-tests-runner" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.133722 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a266a5-ac00-49e1-9443-def4cebe65ad" containerName="tempest-tests-tempest-tests-runner" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.133942 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="469bd464-b4f1-401d-be7b-da5ac0b089d2" containerName="collect-profiles" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.133964 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5a266a5-ac00-49e1-9443-def4cebe65ad" 
containerName="tempest-tests-tempest-tests-runner" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.134751 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.137758 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nvl62" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.143822 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.238270 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.238411 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9t94\" (UniqueName: \"kubernetes.io/projected/0a460d0d-7c4a-473e-9df8-ca1b1979cb25-kube-api-access-j9t94\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.339501 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9t94\" (UniqueName: \"kubernetes.io/projected/0a460d0d-7c4a-473e-9df8-ca1b1979cb25-kube-api-access-j9t94\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.339687 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.340210 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.361726 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9t94\" (UniqueName: \"kubernetes.io/projected/0a460d0d-7c4a-473e-9df8-ca1b1979cb25-kube-api-access-j9t94\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.365382 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.465986 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.941110 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.983755 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0a460d0d-7c4a-473e-9df8-ca1b1979cb25","Type":"ContainerStarted","Data":"1a87b80a2722e3883477d932a43fa5d226c4d3362eed3de1f3bdaabe855b8647"} Feb 02 11:45:41 crc kubenswrapper[4782]: I0202 11:45:41.005343 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0a460d0d-7c4a-473e-9df8-ca1b1979cb25","Type":"ContainerStarted","Data":"4bfeaf59f150cc47de2c37f54aa1da64348bb0f6d81b685a8aefaf8621e99b95"} Feb 02 11:45:41 crc kubenswrapper[4782]: I0202 11:45:41.024482 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.536651175 podStartE2EDuration="3.024465047s" podCreationTimestamp="2026-02-02 11:45:38 +0000 UTC" firstStartedPulling="2026-02-02 11:45:38.951582995 +0000 UTC m=+4018.835775711" lastFinishedPulling="2026-02-02 11:45:40.439396867 +0000 UTC m=+4020.323589583" observedRunningTime="2026-02-02 11:45:41.021486701 +0000 UTC m=+4020.905679427" watchObservedRunningTime="2026-02-02 11:45:41.024465047 +0000 UTC m=+4020.908657763" Feb 02 11:45:48 crc kubenswrapper[4782]: I0202 11:45:48.820844 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:45:48 crc kubenswrapper[4782]: E0202 11:45:48.821592 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:45:55 crc kubenswrapper[4782]: I0202 11:45:55.428299 4782 scope.go:117] "RemoveContainer" containerID="a290ebd90dc2cdcb55f14cdbbbcabca2eb0ae3e2b4fabd92e76c199c11dd8634" Feb 02 11:46:02 crc kubenswrapper[4782]: I0202 11:46:02.821231 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:46:02 crc kubenswrapper[4782]: E0202 11:46:02.822060 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.098391 4782 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-jxb7c/must-gather-z6fv2"] Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.100806 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.111128 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jxb7c/must-gather-z6fv2"] Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.118969 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jxb7c"/"kube-root-ca.crt" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.119275 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jxb7c"/"openshift-service-ca.crt" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.123461 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jxb7c"/"default-dockercfg-l85z4" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.215903 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nkqd\" (UniqueName: \"kubernetes.io/projected/2e9c98b4-2bbb-4602-895c-b5e75a84008e-kube-api-access-6nkqd\") pod \"must-gather-z6fv2\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.216081 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e9c98b4-2bbb-4602-895c-b5e75a84008e-must-gather-output\") pod \"must-gather-z6fv2\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.318510 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nkqd\" (UniqueName: \"kubernetes.io/projected/2e9c98b4-2bbb-4602-895c-b5e75a84008e-kube-api-access-6nkqd\") pod \"must-gather-z6fv2\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.318695 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e9c98b4-2bbb-4602-895c-b5e75a84008e-must-gather-output\") pod \"must-gather-z6fv2\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.319169 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e9c98b4-2bbb-4602-895c-b5e75a84008e-must-gather-output\") pod \"must-gather-z6fv2\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.346185 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nkqd\" (UniqueName: \"kubernetes.io/projected/2e9c98b4-2bbb-4602-895c-b5e75a84008e-kube-api-access-6nkqd\") pod \"must-gather-z6fv2\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.422241 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.907850 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jxb7c/must-gather-z6fv2"] Feb 02 11:46:06 crc kubenswrapper[4782]: I0202 11:46:06.226888 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" event={"ID":"2e9c98b4-2bbb-4602-895c-b5e75a84008e","Type":"ContainerStarted","Data":"350a73414a44139cf56d3f302a04e19f6fa7172fa0d1cce7dcf590751826e7f0"} Feb 02 11:46:12 crc kubenswrapper[4782]: I0202 11:46:12.307007 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" event={"ID":"2e9c98b4-2bbb-4602-895c-b5e75a84008e","Type":"ContainerStarted","Data":"834b1f7bf9c6e2dad97c21bc571b1ce20a5f5b1236b41db9e26abc8b7d95977d"} Feb 02 11:46:12 crc kubenswrapper[4782]: I0202 11:46:12.307633 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" event={"ID":"2e9c98b4-2bbb-4602-895c-b5e75a84008e","Type":"ContainerStarted","Data":"6e04e2bb498c17a3f4826768bd688615b9dc22330d5283bef168294c5a44a394"} Feb 02 11:46:12 crc kubenswrapper[4782]: I0202 11:46:12.333247 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" podStartSLOduration=1.9516449420000002 podStartE2EDuration="7.333218773s" podCreationTimestamp="2026-02-02 11:46:05 +0000 UTC" firstStartedPulling="2026-02-02 11:46:05.915266142 +0000 UTC m=+4045.799458858" lastFinishedPulling="2026-02-02 11:46:11.296839973 +0000 UTC m=+4051.181032689" observedRunningTime="2026-02-02 11:46:12.321949628 +0000 UTC m=+4052.206142364" watchObservedRunningTime="2026-02-02 11:46:12.333218773 +0000 UTC m=+4052.217411489" Feb 02 11:46:13 crc kubenswrapper[4782]: I0202 11:46:13.820921 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:46:13 crc kubenswrapper[4782]: E0202 11:46:13.821541 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:46:16 crc kubenswrapper[4782]: E0202 11:46:16.863149 4782 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.147:57964->38.102.83.147:40373: read tcp 38.102.83.147:57964->38.102.83.147:40373: read: connection reset by peer Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.171808 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-phbvf"] Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.176858 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.237448 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-host\") pod \"crc-debug-phbvf\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.238160 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhtrj\" (UniqueName: \"kubernetes.io/projected/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-kube-api-access-dhtrj\") pod \"crc-debug-phbvf\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.340535 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhtrj\" (UniqueName: \"kubernetes.io/projected/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-kube-api-access-dhtrj\") pod \"crc-debug-phbvf\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.340703 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-host\") pod \"crc-debug-phbvf\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.341102 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-host\") pod \"crc-debug-phbvf\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.371798 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhtrj\" (UniqueName: \"kubernetes.io/projected/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-kube-api-access-dhtrj\") pod \"crc-debug-phbvf\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.503947 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:20 crc kubenswrapper[4782]: I0202 11:46:20.372346 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" event={"ID":"7ab47ea7-89b6-4fb1-b663-5e4e26a19975","Type":"ContainerStarted","Data":"dec97e17aa8cc0816aab9519446752a1eb1800085d33b772d42030a1969af361"} Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.582444 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ghfkv"] Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.585926 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.627549 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ghfkv"] Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.700856 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jnrs\" (UniqueName: \"kubernetes.io/projected/46a2322e-7ca2-41f9-90af-bca1a4a7c157-kube-api-access-4jnrs\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.700930 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-utilities\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.701120 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-catalog-content\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.802817 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jnrs\" (UniqueName: \"kubernetes.io/projected/46a2322e-7ca2-41f9-90af-bca1a4a7c157-kube-api-access-4jnrs\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.802892 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-utilities\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.803142 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-catalog-content\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.805235 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-catalog-content\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.807936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-utilities\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.836746 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4jnrs\" (UniqueName: \"kubernetes.io/projected/46a2322e-7ca2-41f9-90af-bca1a4a7c157-kube-api-access-4jnrs\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.907153 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:27 crc kubenswrapper[4782]: I0202 11:46:27.821023 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:46:27 crc kubenswrapper[4782]: E0202 11:46:27.821816 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:46:34 crc kubenswrapper[4782]: I0202 11:46:34.257491 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ghfkv"] Feb 02 11:46:34 crc kubenswrapper[4782]: W0202 11:46:34.263480 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a2322e_7ca2_41f9_90af_bca1a4a7c157.slice/crio-c06defc817045873e2732bd8892085168788497887185e9fb0e07e7f62b48ec3 WatchSource:0}: Error finding container c06defc817045873e2732bd8892085168788497887185e9fb0e07e7f62b48ec3: Status 404 returned error can't find the container with id c06defc817045873e2732bd8892085168788497887185e9fb0e07e7f62b48ec3 Feb 02 11:46:34 crc kubenswrapper[4782]: I0202 11:46:34.532277 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" event={"ID":"7ab47ea7-89b6-4fb1-b663-5e4e26a19975","Type":"ContainerStarted","Data":"703fb5905ac3fac00de5179c176a56d6c1ad31055520af949a74d491249a03d8"} Feb 02 11:46:34 crc kubenswrapper[4782]: I0202 11:46:34.534497 4782 generic.go:334] "Generic (PLEG): container finished" podID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerID="7a907be3c666b2b37f0f47822c1f99e06b350871caf9dd81a1bd3196d758a463" exitCode=0 Feb 02 11:46:34 crc kubenswrapper[4782]: I0202 11:46:34.534563 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerDied","Data":"7a907be3c666b2b37f0f47822c1f99e06b350871caf9dd81a1bd3196d758a463"} Feb 02 11:46:34 crc kubenswrapper[4782]: I0202 11:46:34.534606 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerStarted","Data":"c06defc817045873e2732bd8892085168788497887185e9fb0e07e7f62b48ec3"} Feb 02 11:46:34 crc kubenswrapper[4782]: I0202 11:46:34.554591 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" podStartSLOduration=1.430717584 podStartE2EDuration="15.554569485s" podCreationTimestamp="2026-02-02 11:46:19 +0000 UTC" firstStartedPulling="2026-02-02 11:46:19.54877563 +0000 UTC m=+4059.432968346" lastFinishedPulling="2026-02-02 11:46:33.672627521 +0000 UTC m=+4073.556820247" observedRunningTime="2026-02-02 
Feb 02 11:46:36 crc kubenswrapper[4782]: I0202 11:46:36.554965 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerStarted","Data":"1aeda40de7d8e1240fab34ca169576dbc672ec2fdd84a6c2c6ea829662df6243"}
Feb 02 11:46:42 crc kubenswrapper[4782]: I0202 11:46:42.619455 4782 generic.go:334] "Generic (PLEG): container finished" podID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerID="1aeda40de7d8e1240fab34ca169576dbc672ec2fdd84a6c2c6ea829662df6243" exitCode=0
Feb 02 11:46:42 crc kubenswrapper[4782]: I0202 11:46:42.619535 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerDied","Data":"1aeda40de7d8e1240fab34ca169576dbc672ec2fdd84a6c2c6ea829662df6243"}
Feb 02 11:46:42 crc kubenswrapper[4782]: I0202 11:46:42.822552 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6"
Feb 02 11:46:42 crc kubenswrapper[4782]: E0202 11:46:42.822877 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:46:49 crc kubenswrapper[4782]: I0202 11:46:49.701501 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerStarted","Data":"1af8e65d8c1625f20711969808b519726cb7dbec4573639b60716444c22f1ce8"}
Feb 02 11:46:49 crc kubenswrapper[4782]: I0202 11:46:49.728226 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ghfkv" podStartSLOduration=10.704945434999999 podStartE2EDuration="24.728200579s" podCreationTimestamp="2026-02-02 11:46:25 +0000 UTC" firstStartedPulling="2026-02-02 11:46:34.540236903 +0000 UTC m=+4074.424429619" lastFinishedPulling="2026-02-02 11:46:48.563492047 +0000 UTC m=+4088.447684763" observedRunningTime="2026-02-02 11:46:49.724228265 +0000 UTC m=+4089.608420991" watchObservedRunningTime="2026-02-02 11:46:49.728200579 +0000 UTC m=+4089.612393305"
Feb 02 11:46:54 crc kubenswrapper[4782]: I0202 11:46:54.822485 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6"
Feb 02 11:46:54 crc kubenswrapper[4782]: E0202 11:46:54.824004 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:46:55 crc kubenswrapper[4782]: I0202 11:46:55.909935 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ghfkv"
Feb 02 11:46:55 crc kubenswrapper[4782]: I0202 11:46:55.910329 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ghfkv"
Feb 02 11:46:57 crc kubenswrapper[4782]: I0202 11:46:57.334473 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ghfkv" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" probeResult="failure" output=<
Feb 02 11:46:57 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s
Feb 02 11:46:57 crc kubenswrapper[4782]: >
Feb 02 11:47:06 crc kubenswrapper[4782]: I0202 11:47:06.984599 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ghfkv" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" probeResult="failure" output=<
Feb 02 11:47:06 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s
Feb 02 11:47:06 crc kubenswrapper[4782]: >
Feb 02 11:47:09 crc kubenswrapper[4782]: I0202 11:47:09.821064 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6"
Feb 02 11:47:09 crc kubenswrapper[4782]: E0202 11:47:09.821924 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.531941 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hxxbq"]
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.534241 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hxxbq"
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.555873 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hxxbq"]
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.627203 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-catalog-content\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq"
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.627269 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgcsk\" (UniqueName: \"kubernetes.io/projected/97658eff-1922-4b30-b7b4-edc0b5bc31e8-kube-api-access-kgcsk\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq"
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.627377 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-utilities\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq"
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.729140 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-catalog-content\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq"
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.729480 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgcsk\" (UniqueName: \"kubernetes.io/projected/97658eff-1922-4b30-b7b4-edc0b5bc31e8-kube-api-access-kgcsk\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq"
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.729610 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-utilities\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq"
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.729840 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-catalog-content\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq"
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.730112 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-utilities\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq"
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.764461 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgcsk\" (UniqueName: \"kubernetes.io/projected/97658eff-1922-4b30-b7b4-edc0b5bc31e8-kube-api-access-kgcsk\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq"
Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.856309 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hxxbq"
Feb 02 11:47:15 crc kubenswrapper[4782]: I0202 11:47:15.566073 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hxxbq"]
Feb 02 11:47:15 crc kubenswrapper[4782]: I0202 11:47:15.973702 4782 generic.go:334] "Generic (PLEG): container finished" podID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerID="b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f" exitCode=0
Feb 02 11:47:15 crc kubenswrapper[4782]: I0202 11:47:15.973871 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerDied","Data":"b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f"}
Feb 02 11:47:15 crc kubenswrapper[4782]: I0202 11:47:15.974001 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerStarted","Data":"b170233de7d0c6ed8e5161b038fbec63ac00d1c9bc9cf57ca0a7cd8f776a230b"}
Feb 02 11:47:16 crc kubenswrapper[4782]: I0202 11:47:16.986532 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ghfkv" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" probeResult="failure" output=<
Feb 02 11:47:16 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s
Feb 02 11:47:16 crc kubenswrapper[4782]: >
Feb 02 11:47:17 crc kubenswrapper[4782]: I0202 11:47:17.994928 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerStarted","Data":"1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc"}
Feb 02 11:47:20 crc kubenswrapper[4782]: I0202 11:47:20.024297 4782 generic.go:334] "Generic (PLEG): container finished" podID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerID="1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc" exitCode=0
Feb 02 11:47:20 crc kubenswrapper[4782]: I0202 11:47:20.024490 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerDied","Data":"1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc"}
Feb 02 11:47:20 crc kubenswrapper[4782]: I0202 11:47:20.830378 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6"
Feb 02 11:47:20 crc kubenswrapper[4782]: E0202 11:47:20.831002 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:47:22 crc kubenswrapper[4782]: I0202 11:47:22.061792 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerStarted","Data":"205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5"} Feb 02 11:47:22 crc kubenswrapper[4782]: I0202 11:47:22.110477 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hxxbq" podStartSLOduration=3.298884194 podStartE2EDuration="8.11044398s" podCreationTimestamp="2026-02-02 11:47:14 +0000 UTC" firstStartedPulling="2026-02-02 11:47:15.975810354 +0000 UTC m=+4115.860003070" lastFinishedPulling="2026-02-02 11:47:20.78737014 +0000 UTC m=+4120.671562856" observedRunningTime="2026-02-02 11:47:22.095618333 +0000 UTC m=+4121.979811069" watchObservedRunningTime="2026-02-02 11:47:22.11044398 +0000 UTC m=+4121.994636696" Feb 02 11:47:24 crc kubenswrapper[4782]: I0202 11:47:24.856867 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:24 crc kubenswrapper[4782]: I0202 11:47:24.857231 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:25 crc kubenswrapper[4782]: I0202 11:47:25.927231 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hxxbq" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="registry-server" probeResult="failure" output=< Feb 02 11:47:25 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:47:25 crc kubenswrapper[4782]: > Feb 02 11:47:26 crc kubenswrapper[4782]: I0202 11:47:26.963560 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ghfkv" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" probeResult="failure" output=< Feb 02 11:47:26 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:47:26 crc kubenswrapper[4782]: > Feb 02 11:47:30 crc kubenswrapper[4782]: I0202 11:47:30.180524 4782 generic.go:334] "Generic (PLEG): container finished" podID="7ab47ea7-89b6-4fb1-b663-5e4e26a19975" containerID="703fb5905ac3fac00de5179c176a56d6c1ad31055520af949a74d491249a03d8" exitCode=0 Feb 02 11:47:30 crc kubenswrapper[4782]: I0202 11:47:30.181076 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" event={"ID":"7ab47ea7-89b6-4fb1-b663-5e4e26a19975","Type":"ContainerDied","Data":"703fb5905ac3fac00de5179c176a56d6c1ad31055520af949a74d491249a03d8"} Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.320206 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.358086 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-phbvf"] Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.366425 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-phbvf"] Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.420738 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhtrj\" (UniqueName: \"kubernetes.io/projected/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-kube-api-access-dhtrj\") pod \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.421055 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-host\") pod \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.421145 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-host" (OuterVolumeSpecName: "host") pod "7ab47ea7-89b6-4fb1-b663-5e4e26a19975" (UID: "7ab47ea7-89b6-4fb1-b663-5e4e26a19975"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.421666 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.427779 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-kube-api-access-dhtrj" (OuterVolumeSpecName: "kube-api-access-dhtrj") pod "7ab47ea7-89b6-4fb1-b663-5e4e26a19975" (UID: "7ab47ea7-89b6-4fb1-b663-5e4e26a19975"). InnerVolumeSpecName "kube-api-access-dhtrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.523472 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhtrj\" (UniqueName: \"kubernetes.io/projected/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-kube-api-access-dhtrj\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.821518 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:47:31 crc kubenswrapper[4782]: E0202 11:47:31.822107 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.204026 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dec97e17aa8cc0816aab9519446752a1eb1800085d33b772d42030a1969af361" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.204134 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.593193 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-68p8l"] Feb 02 11:47:32 crc kubenswrapper[4782]: E0202 11:47:32.593682 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab47ea7-89b6-4fb1-b663-5e4e26a19975" containerName="container-00" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.593699 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab47ea7-89b6-4fb1-b663-5e4e26a19975" containerName="container-00" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.593933 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab47ea7-89b6-4fb1-b663-5e4e26a19975" containerName="container-00" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.594770 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.647882 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b8d9241-1979-4510-b308-3fb134dc12fe-host\") pod \"crc-debug-68p8l\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.648186 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwk2s\" (UniqueName: \"kubernetes.io/projected/1b8d9241-1979-4510-b308-3fb134dc12fe-kube-api-access-jwk2s\") pod \"crc-debug-68p8l\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.750493 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b8d9241-1979-4510-b308-3fb134dc12fe-host\") pod \"crc-debug-68p8l\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.750622 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwk2s\" (UniqueName: \"kubernetes.io/projected/1b8d9241-1979-4510-b308-3fb134dc12fe-kube-api-access-jwk2s\") pod \"crc-debug-68p8l\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.750823 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b8d9241-1979-4510-b308-3fb134dc12fe-host\") pod \"crc-debug-68p8l\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.777885 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwk2s\" (UniqueName: \"kubernetes.io/projected/1b8d9241-1979-4510-b308-3fb134dc12fe-kube-api-access-jwk2s\") pod \"crc-debug-68p8l\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.833204 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab47ea7-89b6-4fb1-b663-5e4e26a19975" 
path="/var/lib/kubelet/pods/7ab47ea7-89b6-4fb1-b663-5e4e26a19975/volumes" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.918822 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: W0202 11:47:32.961003 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b8d9241_1979_4510_b308_3fb134dc12fe.slice/crio-86f19fb9a70e9e0c17e41758e8179772d99c0dbe300bb76056aed4a3b587c2aa WatchSource:0}: Error finding container 86f19fb9a70e9e0c17e41758e8179772d99c0dbe300bb76056aed4a3b587c2aa: Status 404 returned error can't find the container with id 86f19fb9a70e9e0c17e41758e8179772d99c0dbe300bb76056aed4a3b587c2aa Feb 02 11:47:33 crc kubenswrapper[4782]: I0202 11:47:33.213970 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" event={"ID":"1b8d9241-1979-4510-b308-3fb134dc12fe","Type":"ContainerStarted","Data":"c42adbb6d99a6a080380d109dd83de68904cf76a54f046bc87430e3aee33f292"} Feb 02 11:47:33 crc kubenswrapper[4782]: I0202 11:47:33.214306 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" event={"ID":"1b8d9241-1979-4510-b308-3fb134dc12fe","Type":"ContainerStarted","Data":"86f19fb9a70e9e0c17e41758e8179772d99c0dbe300bb76056aed4a3b587c2aa"} Feb 02 11:47:33 crc kubenswrapper[4782]: I0202 11:47:33.234530 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" podStartSLOduration=1.234506191 podStartE2EDuration="1.234506191s" podCreationTimestamp="2026-02-02 11:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:47:33.227303294 +0000 UTC m=+4133.111496020" watchObservedRunningTime="2026-02-02 11:47:33.234506191 +0000 UTC m=+4133.118698907" Feb 02 11:47:34 crc kubenswrapper[4782]: I0202 11:47:34.230245 4782 generic.go:334] "Generic (PLEG): container finished" podID="1b8d9241-1979-4510-b308-3fb134dc12fe" containerID="c42adbb6d99a6a080380d109dd83de68904cf76a54f046bc87430e3aee33f292" exitCode=0 Feb 02 11:47:34 crc kubenswrapper[4782]: I0202 11:47:34.230284 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" event={"ID":"1b8d9241-1979-4510-b308-3fb134dc12fe","Type":"ContainerDied","Data":"c42adbb6d99a6a080380d109dd83de68904cf76a54f046bc87430e3aee33f292"} Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.335402 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.366591 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-68p8l"] Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.381579 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-68p8l"] Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.407940 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b8d9241-1979-4510-b308-3fb134dc12fe-host\") pod \"1b8d9241-1979-4510-b308-3fb134dc12fe\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.408040 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8d9241-1979-4510-b308-3fb134dc12fe-host" (OuterVolumeSpecName: "host") pod "1b8d9241-1979-4510-b308-3fb134dc12fe" (UID: "1b8d9241-1979-4510-b308-3fb134dc12fe"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.408171 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwk2s\" (UniqueName: \"kubernetes.io/projected/1b8d9241-1979-4510-b308-3fb134dc12fe-kube-api-access-jwk2s\") pod \"1b8d9241-1979-4510-b308-3fb134dc12fe\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.408730 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b8d9241-1979-4510-b308-3fb134dc12fe-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.415146 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8d9241-1979-4510-b308-3fb134dc12fe-kube-api-access-jwk2s" (OuterVolumeSpecName: "kube-api-access-jwk2s") pod "1b8d9241-1979-4510-b308-3fb134dc12fe" (UID: "1b8d9241-1979-4510-b308-3fb134dc12fe"). InnerVolumeSpecName "kube-api-access-jwk2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.510965 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwk2s\" (UniqueName: \"kubernetes.io/projected/1b8d9241-1979-4510-b308-3fb134dc12fe-kube-api-access-jwk2s\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.914143 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hxxbq" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="registry-server" probeResult="failure" output=< Feb 02 11:47:35 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:47:35 crc kubenswrapper[4782]: > Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.004006 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.079073 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.246463 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f19fb9a70e9e0c17e41758e8179772d99c0dbe300bb76056aed4a3b587c2aa" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.246476 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.253736 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ghfkv"] Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.782618 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-4m2mp"] Feb 02 11:47:36 crc kubenswrapper[4782]: E0202 11:47:36.784847 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8d9241-1979-4510-b308-3fb134dc12fe" containerName="container-00" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.784980 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8d9241-1979-4510-b308-3fb134dc12fe" containerName="container-00" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.785336 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8d9241-1979-4510-b308-3fb134dc12fe" containerName="container-00" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.786281 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.832428 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b8d9241-1979-4510-b308-3fb134dc12fe" path="/var/lib/kubelet/pods/1b8d9241-1979-4510-b308-3fb134dc12fe/volumes" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.837110 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmvgq\" (UniqueName: \"kubernetes.io/projected/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-kube-api-access-tmvgq\") pod \"crc-debug-4m2mp\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.837346 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-host\") pod \"crc-debug-4m2mp\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.939477 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmvgq\" (UniqueName: \"kubernetes.io/projected/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-kube-api-access-tmvgq\") pod \"crc-debug-4m2mp\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.939532 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-host\") pod \"crc-debug-4m2mp\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.940712 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-host\") pod \"crc-debug-4m2mp\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:37 crc kubenswrapper[4782]: I0202 11:47:37.254339 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ghfkv" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" containerID="cri-o://1af8e65d8c1625f20711969808b519726cb7dbec4573639b60716444c22f1ce8" gracePeriod=2 Feb 02 11:47:37 crc kubenswrapper[4782]: I0202 11:47:37.466693 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmvgq\" (UniqueName: \"kubernetes.io/projected/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-kube-api-access-tmvgq\") pod \"crc-debug-4m2mp\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:37 crc kubenswrapper[4782]: I0202 11:47:37.705006 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.348618 4782 generic.go:334] "Generic (PLEG): container finished" podID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerID="1af8e65d8c1625f20711969808b519726cb7dbec4573639b60716444c22f1ce8" exitCode=0 Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.348765 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerDied","Data":"1af8e65d8c1625f20711969808b519726cb7dbec4573639b60716444c22f1ce8"} Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.362767 4782 generic.go:334] "Generic (PLEG): container finished" podID="b4cb3d8a-7317-496a-9944-ecca54fd2e5c" containerID="c9a3d75498664651067c9e5dff908f0096c2e0acbb1581811fb18a849226abca" exitCode=0 Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.362806 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" event={"ID":"b4cb3d8a-7317-496a-9944-ecca54fd2e5c","Type":"ContainerDied","Data":"c9a3d75498664651067c9e5dff908f0096c2e0acbb1581811fb18a849226abca"} Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.362831 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" event={"ID":"b4cb3d8a-7317-496a-9944-ecca54fd2e5c","Type":"ContainerStarted","Data":"a8bb992f7f3bded0ed4dd05e59e214691573d431a9a3ac0ef86213ff790227cc"} Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.427815 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-4m2mp"] Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.444211 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-4m2mp"] Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.673266 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.838881 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jnrs\" (UniqueName: \"kubernetes.io/projected/46a2322e-7ca2-41f9-90af-bca1a4a7c157-kube-api-access-4jnrs\") pod \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.839023 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-catalog-content\") pod \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.839053 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-utilities\") pod \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.840271 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-utilities" (OuterVolumeSpecName: "utilities") pod "46a2322e-7ca2-41f9-90af-bca1a4a7c157" (UID: "46a2322e-7ca2-41f9-90af-bca1a4a7c157"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.847347 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a2322e-7ca2-41f9-90af-bca1a4a7c157-kube-api-access-4jnrs" (OuterVolumeSpecName: "kube-api-access-4jnrs") pod "46a2322e-7ca2-41f9-90af-bca1a4a7c157" (UID: "46a2322e-7ca2-41f9-90af-bca1a4a7c157"). InnerVolumeSpecName "kube-api-access-4jnrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.943502 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.943800 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jnrs\" (UniqueName: \"kubernetes.io/projected/46a2322e-7ca2-41f9-90af-bca1a4a7c157-kube-api-access-4jnrs\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.973394 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46a2322e-7ca2-41f9-90af-bca1a4a7c157" (UID: "46a2322e-7ca2-41f9-90af-bca1a4a7c157"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.045401 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.374860 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerDied","Data":"c06defc817045873e2732bd8892085168788497887185e9fb0e07e7f62b48ec3"} Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.374925 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.374951 4782 scope.go:117] "RemoveContainer" containerID="1af8e65d8c1625f20711969808b519726cb7dbec4573639b60716444c22f1ce8" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.414800 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ghfkv"] Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.424347 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ghfkv"] Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.488814 4782 scope.go:117] "RemoveContainer" containerID="1aeda40de7d8e1240fab34ca169576dbc672ec2fdd84a6c2c6ea829662df6243" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.597554 4782 scope.go:117] "RemoveContainer" containerID="7a907be3c666b2b37f0f47822c1f99e06b350871caf9dd81a1bd3196d758a463" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.628628 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.757216 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-host\") pod \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.757490 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmvgq\" (UniqueName: \"kubernetes.io/projected/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-kube-api-access-tmvgq\") pod \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.757556 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-host" (OuterVolumeSpecName: "host") pod "b4cb3d8a-7317-496a-9944-ecca54fd2e5c" (UID: "b4cb3d8a-7317-496a-9944-ecca54fd2e5c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.758070 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.762103 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-kube-api-access-tmvgq" (OuterVolumeSpecName: "kube-api-access-tmvgq") pod "b4cb3d8a-7317-496a-9944-ecca54fd2e5c" (UID: "b4cb3d8a-7317-496a-9944-ecca54fd2e5c"). InnerVolumeSpecName "kube-api-access-tmvgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.859147 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmvgq\" (UniqueName: \"kubernetes.io/projected/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-kube-api-access-tmvgq\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:40 crc kubenswrapper[4782]: I0202 11:47:40.384278 4782 scope.go:117] "RemoveContainer" containerID="c9a3d75498664651067c9e5dff908f0096c2e0acbb1581811fb18a849226abca" Feb 02 11:47:40 crc kubenswrapper[4782]: I0202 11:47:40.384736 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:40 crc kubenswrapper[4782]: I0202 11:47:40.834854 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" path="/var/lib/kubelet/pods/46a2322e-7ca2-41f9-90af-bca1a4a7c157/volumes" Feb 02 11:47:40 crc kubenswrapper[4782]: I0202 11:47:40.836022 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4cb3d8a-7317-496a-9944-ecca54fd2e5c" path="/var/lib/kubelet/pods/b4cb3d8a-7317-496a-9944-ecca54fd2e5c/volumes" Feb 02 11:47:44 crc kubenswrapper[4782]: I0202 11:47:44.917862 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:44 crc kubenswrapper[4782]: I0202 11:47:44.972094 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:45 crc kubenswrapper[4782]: I0202 11:47:45.160832 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hxxbq"] Feb 02 11:47:45 crc kubenswrapper[4782]: I0202 11:47:45.820859 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:47:45 crc kubenswrapper[4782]: E0202 11:47:45.821441 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:47:46 crc kubenswrapper[4782]: I0202 11:47:46.435463 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hxxbq" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="registry-server" containerID="cri-o://205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5" gracePeriod=2 Feb 02 11:47:46 crc kubenswrapper[4782]: I0202 11:47:46.972905 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.016520 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-utilities\") pod \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.016707 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-catalog-content\") pod \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.016822 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgcsk\" (UniqueName: \"kubernetes.io/projected/97658eff-1922-4b30-b7b4-edc0b5bc31e8-kube-api-access-kgcsk\") pod \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.018421 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-utilities" (OuterVolumeSpecName: "utilities") pod "97658eff-1922-4b30-b7b4-edc0b5bc31e8" (UID: "97658eff-1922-4b30-b7b4-edc0b5bc31e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.030156 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97658eff-1922-4b30-b7b4-edc0b5bc31e8-kube-api-access-kgcsk" (OuterVolumeSpecName: "kube-api-access-kgcsk") pod "97658eff-1922-4b30-b7b4-edc0b5bc31e8" (UID: "97658eff-1922-4b30-b7b4-edc0b5bc31e8"). InnerVolumeSpecName "kube-api-access-kgcsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.081308 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97658eff-1922-4b30-b7b4-edc0b5bc31e8" (UID: "97658eff-1922-4b30-b7b4-edc0b5bc31e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.119761 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.119806 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgcsk\" (UniqueName: \"kubernetes.io/projected/97658eff-1922-4b30-b7b4-edc0b5bc31e8-kube-api-access-kgcsk\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.119819 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.446167 4782 generic.go:334] "Generic (PLEG): container finished" podID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerID="205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5" exitCode=0 Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.446214 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerDied","Data":"205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5"} Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.446240 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerDied","Data":"b170233de7d0c6ed8e5161b038fbec63ac00d1c9bc9cf57ca0a7cd8f776a230b"} Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.446256 4782 scope.go:117] "RemoveContainer" containerID="205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.446401 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.472207 4782 scope.go:117] "RemoveContainer" containerID="1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.488369 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hxxbq"] Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.499223 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hxxbq"] Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.081930 4782 scope.go:117] "RemoveContainer" containerID="b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.130982 4782 scope.go:117] "RemoveContainer" containerID="205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5" Feb 02 11:47:48 crc kubenswrapper[4782]: E0202 11:47:48.131538 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5\": container with ID starting with 205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5 not found: ID does not exist" containerID="205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.131565 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5"} err="failed to get container status \"205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5\": rpc error: code = NotFound desc = could not find container \"205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5\": container with ID starting with 205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5 not found: ID does not exist" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.131587 4782 scope.go:117] "RemoveContainer" containerID="1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc" Feb 02 11:47:48 crc kubenswrapper[4782]: E0202 11:47:48.132068 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc\": container with ID starting with 1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc not found: ID does not exist" containerID="1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.132105 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc"} err="failed to get container status \"1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc\": rpc error: code = NotFound desc = could not find container \"1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc\": container with ID starting with 1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc not found: ID does not exist" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.132148 4782 scope.go:117] "RemoveContainer" containerID="b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f" Feb 02 11:47:48 crc kubenswrapper[4782]: E0202 11:47:48.134179 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f\": container with ID starting with b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f not found: ID does not exist" containerID="b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.134205 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f"} err="failed to get container status \"b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f\": rpc error: code = NotFound desc = could not find container \"b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f\": container with ID starting with b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f not found: ID does not exist" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.832421 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" path="/var/lib/kubelet/pods/97658eff-1922-4b30-b7b4-edc0b5bc31e8/volumes" Feb 02 11:47:59 crc kubenswrapper[4782]: I0202 11:47:59.821185 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:48:00 crc kubenswrapper[4782]: I0202 11:48:00.578150 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"81c1275290d3d86dfddaf08310d12fc19619e616245897929ae3beaa237553d0"} Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.425726 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rcmff"] Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426773 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cb3d8a-7317-496a-9944-ecca54fd2e5c" containerName="container-00" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426794 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cb3d8a-7317-496a-9944-ecca54fd2e5c" containerName="container-00" Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426813 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="extract-utilities" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426823 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="extract-utilities" Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426842 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="extract-content" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426850 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="extract-content" Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426861 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="registry-server" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426869 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="registry-server" Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426892 4782 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="extract-utilities" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426901 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="extract-utilities" Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426917 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="extract-content" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426925 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="extract-content" Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426940 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426948 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.427164 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.427193 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="registry-server" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.427211 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cb3d8a-7317-496a-9944-ecca54fd2e5c" containerName="container-00" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.428882 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.444793 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rcmff"] Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.517200 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47nsg\" (UniqueName: \"kubernetes.io/projected/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-kube-api-access-47nsg\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.517291 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-catalog-content\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.517338 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-utilities\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.619421 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47nsg\" (UniqueName: \"kubernetes.io/projected/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-kube-api-access-47nsg\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.619514 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-catalog-content\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.619557 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-utilities\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.620262 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-utilities\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.620570 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-catalog-content\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.643590 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-47nsg\" (UniqueName: \"kubernetes.io/projected/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-kube-api-access-47nsg\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.807024 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:21 crc kubenswrapper[4782]: I0202 11:48:21.598949 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rcmff"] Feb 02 11:48:21 crc kubenswrapper[4782]: I0202 11:48:21.782720 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerStarted","Data":"399714c1ef96a9767b84c1c51a6986d4169addb87d97fd2f2e12e04ef6793e13"} Feb 02 11:48:22 crc kubenswrapper[4782]: I0202 11:48:22.804531 4782 generic.go:334] "Generic (PLEG): container finished" podID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerID="6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a" exitCode=0 Feb 02 11:48:22 crc kubenswrapper[4782]: I0202 11:48:22.804982 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerDied","Data":"6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a"} Feb 02 11:48:23 crc kubenswrapper[4782]: I0202 11:48:23.815415 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerStarted","Data":"9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b"} Feb 02 11:48:25 crc kubenswrapper[4782]: I0202 11:48:25.835665 4782 generic.go:334] "Generic (PLEG): container finished" podID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerID="9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b" exitCode=0 Feb 02 11:48:25 crc kubenswrapper[4782]: I0202 11:48:25.835704 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerDied","Data":"9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b"} Feb 02 11:48:26 crc kubenswrapper[4782]: I0202 11:48:26.847301 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerStarted","Data":"000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac"} Feb 02 11:48:26 crc kubenswrapper[4782]: I0202 11:48:26.875608 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rcmff" podStartSLOduration=3.436281966 podStartE2EDuration="6.875587745s" podCreationTimestamp="2026-02-02 11:48:20 +0000 UTC" firstStartedPulling="2026-02-02 11:48:22.806461198 +0000 UTC m=+4182.690653914" lastFinishedPulling="2026-02-02 11:48:26.245766977 +0000 UTC m=+4186.129959693" observedRunningTime="2026-02-02 11:48:26.866870404 +0000 UTC m=+4186.751063120" watchObservedRunningTime="2026-02-02 11:48:26.875587745 +0000 UTC m=+4186.759780461" Feb 02 11:48:30 crc kubenswrapper[4782]: I0202 11:48:30.808174 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:30 crc kubenswrapper[4782]: I0202 11:48:30.808764 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:31 crc kubenswrapper[4782]: I0202 11:48:31.005368 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:31 crc kubenswrapper[4782]: I0202 11:48:31.066094 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:31 crc kubenswrapper[4782]: I0202 11:48:31.242942 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rcmff"] Feb 02 11:48:32 crc kubenswrapper[4782]: I0202 11:48:32.896480 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rcmff" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="registry-server" containerID="cri-o://000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac" gracePeriod=2 Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.426748 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.600570 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-utilities\") pod \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.601073 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47nsg\" (UniqueName: \"kubernetes.io/projected/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-kube-api-access-47nsg\") pod \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.601100 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-catalog-content\") pod \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.601561 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-utilities" (OuterVolumeSpecName: "utilities") pod "e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" (UID: "e30d6af5-b4e0-4b72-b18b-d9c2daa0983e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.602168 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.607499 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-kube-api-access-47nsg" (OuterVolumeSpecName: "kube-api-access-47nsg") pod "e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" (UID: "e30d6af5-b4e0-4b72-b18b-d9c2daa0983e"). InnerVolumeSpecName "kube-api-access-47nsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.656769 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" (UID: "e30d6af5-b4e0-4b72-b18b-d9c2daa0983e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.704291 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.704343 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47nsg\" (UniqueName: \"kubernetes.io/projected/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-kube-api-access-47nsg\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.909068 4782 generic.go:334] "Generic (PLEG): container finished" podID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerID="000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac" exitCode=0 Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.909134 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerDied","Data":"000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac"} Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.909146 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.909184 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerDied","Data":"399714c1ef96a9767b84c1c51a6986d4169addb87d97fd2f2e12e04ef6793e13"} Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.909212 4782 scope.go:117] "RemoveContainer" containerID="000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.932522 4782 scope.go:117] "RemoveContainer" containerID="9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.999774 4782 scope.go:117] "RemoveContainer" containerID="6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:33.999974 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rcmff"] Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.012883 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rcmff"] Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.032125 4782 scope.go:117] "RemoveContainer" containerID="000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac" Feb 02 11:48:34 crc kubenswrapper[4782]: E0202 11:48:34.032862 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac\": container with ID starting with 
000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac not found: ID does not exist" containerID="000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.032890 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac"} err="failed to get container status \"000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac\": rpc error: code = NotFound desc = could not find container \"000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac\": container with ID starting with 000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac not found: ID does not exist" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.032912 4782 scope.go:117] "RemoveContainer" containerID="9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b" Feb 02 11:48:34 crc kubenswrapper[4782]: E0202 11:48:34.033283 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b\": container with ID starting with 9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b not found: ID does not exist" containerID="9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.033300 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b"} err="failed to get container status \"9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b\": rpc error: code = NotFound desc = could not find container \"9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b\": container with ID starting with 9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b not found: ID does not exist" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.033316 4782 scope.go:117] "RemoveContainer" containerID="6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a" Feb 02 11:48:34 crc kubenswrapper[4782]: E0202 11:48:34.033753 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a\": container with ID starting with 6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a not found: ID does not exist" containerID="6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.033777 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a"} err="failed to get container status \"6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a\": rpc error: code = NotFound desc = could not find container \"6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a\": container with ID starting with 6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a not found: ID does not exist" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.832901 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" path="/var/lib/kubelet/pods/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e/volumes" Feb 02 11:48:36 crc kubenswrapper[4782]: I0202 11:48:36.463678 
4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77c4d8f8d8-7qmjv_52b9ad9f-f95d-4839-9531-4f0f11ca86ff/barbican-api/0.log" Feb 02 11:48:36 crc kubenswrapper[4782]: I0202 11:48:36.678496 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77c4d8f8d8-7qmjv_52b9ad9f-f95d-4839-9531-4f0f11ca86ff/barbican-api-log/0.log" Feb 02 11:48:36 crc kubenswrapper[4782]: I0202 11:48:36.834628 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b54d776c6-xrdvf_ea0f5849-bbf6-4184-8b8c-8e11cd8da661/barbican-keystone-listener/0.log" Feb 02 11:48:36 crc kubenswrapper[4782]: I0202 11:48:36.940234 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b54d776c6-xrdvf_ea0f5849-bbf6-4184-8b8c-8e11cd8da661/barbican-keystone-listener-log/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.090300 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5bbfd966d5-c6jc5_141e9d68-e6ef-441d-aede-3bb1fdcc4d5f/barbican-worker/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.107556 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5bbfd966d5-c6jc5_141e9d68-e6ef-441d-aede-3bb1fdcc4d5f/barbican-worker-log/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.222707 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch_14dddbe2-21a7-417a-8d21-ab97f18aef5d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.468611 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/ceilometer-notification-agent/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.507678 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/ceilometer-central-agent/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.513974 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/proxy-httpd/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.621163 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/sg-core/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.764172 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb_c0c31114-71d7-4d0b-9ad7-74945ed819e3/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.873766 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529_df6c52bb-3b4a-4f78-94d0-edee0f68400c/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:38 crc kubenswrapper[4782]: I0202 11:48:38.079160 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3d71c3db-1389-4568-bb5e-c87dc6a60ddd/cinder-api/0.log" Feb 02 11:48:38 crc kubenswrapper[4782]: I0202 11:48:38.097394 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3d71c3db-1389-4568-bb5e-c87dc6a60ddd/cinder-api-log/0.log" Feb 02 11:48:38 crc kubenswrapper[4782]: I0202 11:48:38.360160 4782 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a/probe/0.log" Feb 02 11:48:38 crc kubenswrapper[4782]: I0202 11:48:38.411936 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a/cinder-backup/0.log" Feb 02 11:48:38 crc kubenswrapper[4782]: I0202 11:48:38.802866 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c35672ba-9e13-4e6d-945a-74b4cf3ee0ff/cinder-scheduler/0.log" Feb 02 11:48:38 crc kubenswrapper[4782]: I0202 11:48:38.968249 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c35672ba-9e13-4e6d-945a-74b4cf3ee0ff/probe/0.log" Feb 02 11:48:39 crc kubenswrapper[4782]: I0202 11:48:39.389394 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_5d7df751-5d4d-4ce4-83c9-70abd18fc7c7/probe/0.log" Feb 02 11:48:39 crc kubenswrapper[4782]: I0202 11:48:39.447376 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf_23a1d5dc-9cfd-4c8a-8534-db3075d99574/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:39 crc kubenswrapper[4782]: I0202 11:48:39.478918 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_5d7df751-5d4d-4ce4-83c9-70abd18fc7c7/cinder-volume/0.log" Feb 02 11:48:39 crc kubenswrapper[4782]: I0202 11:48:39.690242 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-v56zg_6dbc340f-2b20-49aa-8358-26223d367e34/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:39 crc kubenswrapper[4782]: I0202 11:48:39.807418 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7d98f8586f-f76zz_cfe77ae5-55f0-440b-b0af-ef3eb1637800/init/0.log" Feb 02 11:48:39 crc kubenswrapper[4782]: I0202 11:48:39.992237 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7d98f8586f-f76zz_cfe77ae5-55f0-440b-b0af-ef3eb1637800/init/0.log" Feb 02 11:48:40 crc kubenswrapper[4782]: I0202 11:48:40.126945 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fdc86717-3e71-440c-a8f4-9cd4480e46d2/glance-httpd/0.log" Feb 02 11:48:40 crc kubenswrapper[4782]: I0202 11:48:40.150946 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7d98f8586f-f76zz_cfe77ae5-55f0-440b-b0af-ef3eb1637800/dnsmasq-dns/0.log" Feb 02 11:48:40 crc kubenswrapper[4782]: I0202 11:48:40.386571 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fdc86717-3e71-440c-a8f4-9cd4480e46d2/glance-log/0.log" Feb 02 11:48:40 crc kubenswrapper[4782]: I0202 11:48:40.425984 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c11a274-b189-4a4e-9a21-1c1d8fcc7f13/glance-httpd/0.log" Feb 02 11:48:40 crc kubenswrapper[4782]: I0202 11:48:40.462563 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c11a274-b189-4a4e-9a21-1c1d8fcc7f13/glance-log/0.log" Feb 02 11:48:41 crc kubenswrapper[4782]: I0202 11:48:41.213742 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5665456548-9x6qh_306e30f3-8fe7-427e-b8ff-309a561dda88/horizon/1.log" Feb 02 11:48:41 crc 
kubenswrapper[4782]: I0202 11:48:41.345580 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5665456548-9x6qh_306e30f3-8fe7-427e-b8ff-309a561dda88/horizon/0.log" Feb 02 11:48:41 crc kubenswrapper[4782]: I0202 11:48:41.402517 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5665456548-9x6qh_306e30f3-8fe7-427e-b8ff-309a561dda88/horizon-log/0.log" Feb 02 11:48:41 crc kubenswrapper[4782]: I0202 11:48:41.589796 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4jg96_ae3151c2-1646-4d94-93d0-df34ad53d344/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:41 crc kubenswrapper[4782]: I0202 11:48:41.813071 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-h4png_fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.053982 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500501-wcsmz_9e752213-09b8-4c8e-a5b6-9cfbf9cea168/keystone-cron/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.059333 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79d66b847-whsks_df4aa6a3-22bf-459c-becf-3685a170ae22/keystone-api/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.204026 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6953ab25-8ddb-4ab3-b006-116f6ad534db/kube-state-metrics/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.423509 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fjczj_9b66a766-dc87-45dd-a611-d9a30c3f327e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.521971 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2af78116-7ef2-4447-b552-7b0d2eaedf90/manila-api-log/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.529074 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2af78116-7ef2-4447-b552-7b0d2eaedf90/manila-api/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.794265 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_6e465ef3-3141-429f-927f-db1eabdff230/manila-scheduler/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.867473 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_04aa7a3f-6353-4317-8825-1447f8a88842/manila-share/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.885549 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_6e465ef3-3141-429f-927f-db1eabdff230/probe/0.log" Feb 02 11:48:43 crc kubenswrapper[4782]: I0202 11:48:43.057748 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_04aa7a3f-6353-4317-8825-1447f8a88842/probe/0.log" Feb 02 11:48:43 crc kubenswrapper[4782]: I0202 11:48:43.331114 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bdf8f4745-82ddm_ab6192fa-a576-411f-8083-2d6bfa57c39f/neutron-api/0.log" Feb 02 11:48:43 crc kubenswrapper[4782]: I0202 11:48:43.400081 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-5bdf8f4745-82ddm_ab6192fa-a576-411f-8083-2d6bfa57c39f/neutron-httpd/0.log" Feb 02 11:48:43 crc kubenswrapper[4782]: I0202 11:48:43.561499 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7_e6849945-28f4-4218-97c1-6047c2d0c368/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:44 crc kubenswrapper[4782]: I0202 11:48:44.335467 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c3797650-67c5-417c-9b38-52a581a6bbd3/nova-api-log/0.log" Feb 02 11:48:44 crc kubenswrapper[4782]: I0202 11:48:44.455396 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ea60fa1f-5751-4f93-8726-ce0c4be54577/nova-cell0-conductor-conductor/0.log" Feb 02 11:48:44 crc kubenswrapper[4782]: I0202 11:48:44.684597 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c8598880-0557-414a-bbb1-b5d0cdce0738/nova-cell1-conductor-conductor/0.log" Feb 02 11:48:44 crc kubenswrapper[4782]: I0202 11:48:44.778142 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c3797650-67c5-417c-9b38-52a581a6bbd3/nova-api-api/0.log" Feb 02 11:48:44 crc kubenswrapper[4782]: I0202 11:48:44.839739 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_16441e1e-4564-492e-bdce-40eb2652687a/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 11:48:45 crc kubenswrapper[4782]: I0202 11:48:45.061223 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp_dc15a3e1-ea96-499f-a268-b633c15ec75b/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:45 crc kubenswrapper[4782]: I0202 11:48:45.307917 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ffbaaa30-f515-494a-94af-a7a83fb44ada/nova-metadata-log/0.log" Feb 02 11:48:45 crc kubenswrapper[4782]: I0202 11:48:45.584056 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_47aff64c-0afc-4b3c-9e90-cbe926943170/nova-scheduler-scheduler/0.log" Feb 02 11:48:45 crc kubenswrapper[4782]: I0202 11:48:45.789405 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8c2fe596-a023-4206-979f-7f2e7bc81d0e/mysql-bootstrap/0.log" Feb 02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.062296 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8c2fe596-a023-4206-979f-7f2e7bc81d0e/galera/0.log" Feb 02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.119443 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8c2fe596-a023-4206-979f-7f2e7bc81d0e/mysql-bootstrap/0.log" Feb 02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.343018 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_827c472d-1762-4e1c-a096-2d48ca9af689/mysql-bootstrap/0.log" Feb 02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.557520 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_827c472d-1762-4e1c-a096-2d48ca9af689/mysql-bootstrap/0.log" Feb 02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.567496 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_827c472d-1762-4e1c-a096-2d48ca9af689/galera/0.log" Feb 
02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.861635 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ffbaaa30-f515-494a-94af-a7a83fb44ada/nova-metadata-metadata/0.log" Feb 02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.897183 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7ed19b68-33c0-45b1-acbc-b6e9def4e565/openstackclient/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.019444 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-kv4h8_c9cb1af6-ff01-4474-ad02-56938ef7e5a1/openstack-network-exporter/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.140291 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovsdb-server-init/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.413687 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovsdb-server/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.427508 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovsdb-server-init/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.480272 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovs-vswitchd/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.742274 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sv8l5_b009ca1c-fc93-4724-9275-c44039256469/ovn-controller/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.860262 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-sffk6_4a473fb4-7a3c-4103-bad5-570b683e6222/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.995547 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7a65af67-822b-44b8-a2be-a132de866a2e/ovn-northd/0.log" Feb 02 11:48:48 crc kubenswrapper[4782]: I0202 11:48:48.075338 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7a65af67-822b-44b8-a2be-a132de866a2e/openstack-network-exporter/0.log" Feb 02 11:48:48 crc kubenswrapper[4782]: I0202 11:48:48.296840 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d8169f65-2d63-4127-8d23-ba6d56af1156/openstack-network-exporter/0.log" Feb 02 11:48:48 crc kubenswrapper[4782]: I0202 11:48:48.345458 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d8169f65-2d63-4127-8d23-ba6d56af1156/ovsdbserver-nb/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.100512 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_572fc7c8-9560-43d0-ba3e-d3f098494878/openstack-network-exporter/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.200118 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_572fc7c8-9560-43d0-ba3e-d3f098494878/ovsdbserver-sb/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.323066 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-555cfb6c68-sntkc_9040c71d-579d-4f4e-99cf-bb76289b9aa3/placement-api/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.576587 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-555cfb6c68-sntkc_9040c71d-579d-4f4e-99cf-bb76289b9aa3/placement-log/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.706090 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8d450a8e-fd5c-40fe-a4ff-ab265dab04df/setup-container/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.876996 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8d450a8e-fd5c-40fe-a4ff-ab265dab04df/setup-container/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.949748 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5c627ac-51a8-46a5-9ccd-62072de19909/setup-container/0.log" Feb 02 11:48:50 crc kubenswrapper[4782]: I0202 11:48:50.035384 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8d450a8e-fd5c-40fe-a4ff-ab265dab04df/rabbitmq/0.log" Feb 02 11:48:50 crc kubenswrapper[4782]: I0202 11:48:50.746343 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5c627ac-51a8-46a5-9ccd-62072de19909/setup-container/0.log" Feb 02 11:48:50 crc kubenswrapper[4782]: I0202 11:48:50.781701 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5c627ac-51a8-46a5-9ccd-62072de19909/rabbitmq/0.log" Feb 02 11:48:50 crc kubenswrapper[4782]: I0202 11:48:50.825251 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6_cfbbb165-d7b2-48c8-b778-5c66afa9c34d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:51 crc kubenswrapper[4782]: I0202 11:48:51.133251 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw_6cede59e-7f51-455a-8405-3ae76f40e348/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:51 crc kubenswrapper[4782]: I0202 11:48:51.158584 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7pvt6_e25dd29c-ad04-40c3-a682-352af21186fe/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:51 crc kubenswrapper[4782]: I0202 11:48:51.530953 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-j5858_c80c4993-adf6-44f8-a084-21920191de7f/ssh-known-hosts-edpm-deployment/0.log" Feb 02 11:48:51 crc kubenswrapper[4782]: I0202 11:48:51.541387 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a5a266a5-ac00-49e1-9443-def4cebe65ad/tempest-tests-tempest-tests-runner/0.log" Feb 02 11:48:51 crc kubenswrapper[4782]: I0202 11:48:51.755851 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_0a460d0d-7c4a-473e-9df8-ca1b1979cb25/test-operator-logs-container/0.log" Feb 02 11:48:52 crc kubenswrapper[4782]: I0202 11:48:52.344909 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr_03fa384d-760c-4c0a-b58f-91a876eeb3d7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:49:05 crc kubenswrapper[4782]: 
I0202 11:49:05.050504 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_17f9dd31-25b9-4b3f-82a6-12096f36308a/memcached/0.log" Feb 02 11:49:30 crc kubenswrapper[4782]: I0202 11:49:30.442183 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/util/0.log" Feb 02 11:49:30 crc kubenswrapper[4782]: I0202 11:49:30.687258 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/util/0.log" Feb 02 11:49:30 crc kubenswrapper[4782]: I0202 11:49:30.732578 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/pull/0.log" Feb 02 11:49:30 crc kubenswrapper[4782]: I0202 11:49:30.761916 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/pull/0.log" Feb 02 11:49:31 crc kubenswrapper[4782]: I0202 11:49:31.092584 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/pull/0.log" Feb 02 11:49:31 crc kubenswrapper[4782]: I0202 11:49:31.119053 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/extract/0.log" Feb 02 11:49:31 crc kubenswrapper[4782]: I0202 11:49:31.142342 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/util/0.log" Feb 02 11:49:31 crc kubenswrapper[4782]: I0202 11:49:31.428092 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-5ngrn_0aa487d3-a703-4ed6-a44c-bc40eb8272ce/manager/0.log" Feb 02 11:49:31 crc kubenswrapper[4782]: I0202 11:49:31.437269 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-vj4sh_bfafd643-4798-4519-934d-8ec3e2e677d9/manager/0.log" Feb 02 11:49:31 crc kubenswrapper[4782]: I0202 11:49:31.677705 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-5vj4j_9ba082c6-4f91-48d6-b5ec-198f46abc135/manager/0.log" Feb 02 11:49:32 crc kubenswrapper[4782]: I0202 11:49:32.142619 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-v7tzl_b03fe987-deab-47e7-829a-b822ab061f20/manager/0.log" Feb 02 11:49:32 crc kubenswrapper[4782]: I0202 11:49:32.296323 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-fkwh5_7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7/manager/0.log" Feb 02 11:49:32 crc kubenswrapper[4782]: I0202 11:49:32.399223 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-7z5k7_224f30b2-1084-4934-8d06-67975a9776ad/manager/0.log" Feb 02 11:49:32 crc kubenswrapper[4782]: I0202 
11:49:32.692018 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-v94dv_6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27/manager/0.log" Feb 02 11:49:32 crc kubenswrapper[4782]: I0202 11:49:32.772712 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-nsx4j_009bc68d-5c70-42ca-9008-152206fd954d/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.226973 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-scr7v_f44c1b55-d189-42dd-9187-90d9e0713790/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.250259 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-w7gld_6b276ac2-533f-43c9-94a1-f0d0e4eb6993/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.324102 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-n88d6_3624e93f-9208-4f82-9f55-12381a637262/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.540055 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-l9q78_216a79cc-1b33-43f7-81ff-400a3b6f3d00/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.713756 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-v8zfh_ab3a96ec-3e51-4147-9a58-6596f2c3ad5c/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.834890 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-r9dkb_7e19a281-abaa-462e-abc7-add4acff7865/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.998022 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf_6c7ac81b-49d3-493d-a794-1cffe78eba5e/manager/0.log" Feb 02 11:49:34 crc kubenswrapper[4782]: I0202 11:49:34.287747 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68b945c8c7-jwf5m_c12a72da-af7d-4f2e-b15d-bb90fa6bd818/operator/0.log" Feb 02 11:49:34 crc kubenswrapper[4782]: I0202 11:49:34.519429 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ml428_504a2863-da7c-4a03-b973-0f687ca20746/registry-server/0.log" Feb 02 11:49:35 crc kubenswrapper[4782]: I0202 11:49:35.033981 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-9ls2x_2f8b3b48-0c03-4922-8966-a3aaca8ebce3/manager/0.log" Feb 02 11:49:35 crc kubenswrapper[4782]: I0202 11:49:35.162431 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-dmncd_6ac6c6b4-9123-4c39-b26f-b07880c1a6c6/manager/0.log" Feb 02 11:49:35 crc kubenswrapper[4782]: I0202 11:49:35.728286 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b655fd757-r6hxp_5844bcff-6d6e-4cf4-89af-dfecfc748869/manager/0.log" Feb 02 11:49:35 crc kubenswrapper[4782]: I0202 11:49:35.871330 4782 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jjztq_83a0d24e-3e0c-4d9a-b735-77c74ceec664/operator/0.log" Feb 02 11:49:36 crc kubenswrapper[4782]: I0202 11:49:36.117173 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-xnzl4_1661d177-41b5-4df5-886f-f3cb7abd1047/manager/0.log" Feb 02 11:49:36 crc kubenswrapper[4782]: I0202 11:49:36.322012 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-ckl5m_c617a97c-fec4-418c-818a-250919ea6882/manager/0.log" Feb 02 11:49:36 crc kubenswrapper[4782]: I0202 11:49:36.381067 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-82nk8_0fd2f609-78f1-4f82-b405-35b5312baf0d/manager/0.log" Feb 02 11:49:36 crc kubenswrapper[4782]: I0202 11:49:36.593394 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-k7t28_127c9a45-7187-4afb-bb45-c34a45e67e4e/manager/0.log" Feb 02 11:50:00 crc kubenswrapper[4782]: I0202 11:50:00.090602 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wqm6f_2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1/control-plane-machine-set-operator/0.log" Feb 02 11:50:00 crc kubenswrapper[4782]: I0202 11:50:00.260970 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5br4b_063dd8d0-356e-4c11-96fd-6ecee1f28da8/kube-rbac-proxy/0.log" Feb 02 11:50:00 crc kubenswrapper[4782]: I0202 11:50:00.379508 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5br4b_063dd8d0-356e-4c11-96fd-6ecee1f28da8/machine-api-operator/0.log" Feb 02 11:50:16 crc kubenswrapper[4782]: I0202 11:50:16.270919 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vcnls_9890a2a1-2fba-4553-87eb-0b70bdc93730/cert-manager-controller/0.log" Feb 02 11:50:16 crc kubenswrapper[4782]: I0202 11:50:16.466566 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jdfqk_49141326-2954-4715-aaa9-86641ac21fa9/cert-manager-cainjector/0.log" Feb 02 11:50:16 crc kubenswrapper[4782]: I0202 11:50:16.570811 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-9h9rr_d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a/cert-manager-webhook/0.log" Feb 02 11:50:22 crc kubenswrapper[4782]: I0202 11:50:22.951835 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:50:22 crc kubenswrapper[4782]: I0202 11:50:22.953586 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:50:29 crc kubenswrapper[4782]: I0202 11:50:29.369763 4782 patch_prober.go:28] interesting 
pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 11:50:29 crc kubenswrapper[4782]: I0202 11:50:29.370383 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 11:50:32 crc kubenswrapper[4782]: I0202 11:50:32.338979 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-5zmc7_00048f8e-9669-413d-b215-6a787d5270c0/nmstate-console-plugin/0.log" Feb 02 11:50:32 crc kubenswrapper[4782]: I0202 11:50:32.493820 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wjctm_3cf88c2a-32c2-4bd3-8832-b480fbfd1afe/nmstate-handler/0.log" Feb 02 11:50:32 crc kubenswrapper[4782]: I0202 11:50:32.635788 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-djhxz_a30862c2-daa1-42d6-8815-aabc8387e789/kube-rbac-proxy/0.log" Feb 02 11:50:32 crc kubenswrapper[4782]: I0202 11:50:32.638275 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-djhxz_a30862c2-daa1-42d6-8815-aabc8387e789/nmstate-metrics/0.log" Feb 02 11:50:32 crc kubenswrapper[4782]: I0202 11:50:32.817179 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-pfjs6_371da653-9a38-424f-9069-14e251c45e1b/nmstate-operator/0.log" Feb 02 11:50:32 crc kubenswrapper[4782]: I0202 11:50:32.893057 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-jpc2k_cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a/nmstate-webhook/0.log" Feb 02 11:50:52 crc kubenswrapper[4782]: I0202 11:50:52.951628 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:50:52 crc kubenswrapper[4782]: I0202 11:50:52.952229 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:51:04 crc kubenswrapper[4782]: I0202 11:51:04.170893 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wxfg2_1d7526eb-b4a4-4ba7-917c-cef512d2dc6a/kube-rbac-proxy/0.log" Feb 02 11:51:04 crc kubenswrapper[4782]: I0202 11:51:04.287189 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wxfg2_1d7526eb-b4a4-4ba7-917c-cef512d2dc6a/controller/0.log" Feb 02 11:51:04 crc kubenswrapper[4782]: I0202 11:51:04.471757 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.224248 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.269956 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.321749 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.383508 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.494345 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.570086 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.582147 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.639592 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.825632 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.864913 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.965093 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.967861 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/controller/0.log" Feb 02 11:51:06 crc kubenswrapper[4782]: I0202 11:51:06.243936 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/kube-rbac-proxy/0.log" Feb 02 11:51:06 crc kubenswrapper[4782]: I0202 11:51:06.252932 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/frr-metrics/0.log" Feb 02 11:51:06 crc kubenswrapper[4782]: I0202 11:51:06.267058 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/kube-rbac-proxy-frr/0.log" Feb 02 11:51:06 crc kubenswrapper[4782]: I0202 11:51:06.508382 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/reloader/0.log" Feb 02 11:51:06 crc kubenswrapper[4782]: I0202 
11:51:06.571573 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8zl72_a3b12ebe-32d3-4d07-b723-64cd83951d38/frr-k8s-webhook-server/0.log" Feb 02 11:51:06 crc kubenswrapper[4782]: I0202 11:51:06.919943 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75c875dcc7-xxjwm_46c800cc-f0c4-4bb1-9714-0f9e5f904bc9/manager/0.log" Feb 02 11:51:07 crc kubenswrapper[4782]: I0202 11:51:07.023841 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-758b4c4d7b-vvspt_78f09d2d-237b-4474-b4b8-f59f49997e44/webhook-server/0.log" Feb 02 11:51:07 crc kubenswrapper[4782]: I0202 11:51:07.497311 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w7rg8_7dcb22a8-d257-446a-8264-63b33c40e24a/kube-rbac-proxy/0.log" Feb 02 11:51:07 crc kubenswrapper[4782]: I0202 11:51:07.737903 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/frr/0.log" Feb 02 11:51:07 crc kubenswrapper[4782]: I0202 11:51:07.815240 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w7rg8_7dcb22a8-d257-446a-8264-63b33c40e24a/speaker/0.log" Feb 02 11:51:22 crc kubenswrapper[4782]: I0202 11:51:22.951660 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:51:22 crc kubenswrapper[4782]: I0202 11:51:22.952158 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:51:22 crc kubenswrapper[4782]: I0202 11:51:22.952204 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:51:22 crc kubenswrapper[4782]: I0202 11:51:22.953285 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81c1275290d3d86dfddaf08310d12fc19619e616245897929ae3beaa237553d0"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:51:22 crc kubenswrapper[4782]: I0202 11:51:22.953362 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://81c1275290d3d86dfddaf08310d12fc19619e616245897929ae3beaa237553d0" gracePeriod=600 Feb 02 11:51:23 crc kubenswrapper[4782]: I0202 11:51:23.840576 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="81c1275290d3d86dfddaf08310d12fc19619e616245897929ae3beaa237553d0" exitCode=0 Feb 02 11:51:23 crc kubenswrapper[4782]: I0202 11:51:23.840768 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"81c1275290d3d86dfddaf08310d12fc19619e616245897929ae3beaa237553d0"} Feb 02 11:51:23 crc kubenswrapper[4782]: I0202 11:51:23.841500 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d"} Feb 02 11:51:23 crc kubenswrapper[4782]: I0202 11:51:23.841660 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:51:24 crc kubenswrapper[4782]: I0202 11:51:24.382502 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/util/0.log" Feb 02 11:51:24 crc kubenswrapper[4782]: I0202 11:51:24.599743 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/util/0.log" Feb 02 11:51:24 crc kubenswrapper[4782]: I0202 11:51:24.611952 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/pull/0.log" Feb 02 11:51:24 crc kubenswrapper[4782]: I0202 11:51:24.638269 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/pull/0.log" Feb 02 11:51:24 crc kubenswrapper[4782]: I0202 11:51:24.847028 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/util/0.log" Feb 02 11:51:24 crc kubenswrapper[4782]: I0202 11:51:24.894051 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/pull/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.040665 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/extract/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.162467 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/util/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.319972 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/util/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.366738 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/pull/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.383399 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/pull/0.log" Feb 
02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.581233 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/extract/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.585256 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/pull/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.613659 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/util/0.log" Feb 02 11:51:26 crc kubenswrapper[4782]: I0202 11:51:26.373346 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-utilities/0.log" Feb 02 11:51:26 crc kubenswrapper[4782]: I0202 11:51:26.622718 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-content/0.log" Feb 02 11:51:26 crc kubenswrapper[4782]: I0202 11:51:26.624686 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-content/0.log" Feb 02 11:51:26 crc kubenswrapper[4782]: I0202 11:51:26.668228 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-utilities/0.log" Feb 02 11:51:26 crc kubenswrapper[4782]: I0202 11:51:26.837059 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-utilities/0.log" Feb 02 11:51:26 crc kubenswrapper[4782]: I0202 11:51:26.901016 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-content/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.136599 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-utilities/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.451938 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/registry-server/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.507884 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-content/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.529086 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-content/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.583732 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-utilities/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.760781 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-utilities/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.765945 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-content/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.160589 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wv9v8_a044a9d0-6c97-46c4-980a-e5d9940e9f74/marketplace-operator/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.357043 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-utilities/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.524441 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-utilities/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.548764 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-content/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.548860 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-content/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.630609 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/registry-server/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.829792 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-content/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.860198 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-utilities/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.920886 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-utilities/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.965084 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/registry-server/0.log" Feb 02 11:51:29 crc kubenswrapper[4782]: I0202 11:51:29.146778 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-utilities/0.log" Feb 02 11:51:29 crc kubenswrapper[4782]: I0202 11:51:29.157652 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-content/0.log" Feb 02 11:51:29 crc kubenswrapper[4782]: I0202 11:51:29.252809 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-content/0.log" Feb 02 11:51:29 crc kubenswrapper[4782]: I0202 11:51:29.396519 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-utilities/0.log" Feb 02 11:51:29 crc kubenswrapper[4782]: I0202 11:51:29.441492 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-content/0.log" Feb 02 11:51:30 crc kubenswrapper[4782]: I0202 11:51:30.077472 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/registry-server/0.log" Feb 02 11:52:56 crc kubenswrapper[4782]: I0202 11:52:56.124287 4782 scope.go:117] "RemoveContainer" containerID="703fb5905ac3fac00de5179c176a56d6c1ad31055520af949a74d491249a03d8" Feb 02 11:53:52 crc kubenswrapper[4782]: I0202 11:53:52.951867 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:53:52 crc kubenswrapper[4782]: I0202 11:53:52.952488 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:53:56 crc kubenswrapper[4782]: I0202 11:53:56.185272 4782 scope.go:117] "RemoveContainer" containerID="c42adbb6d99a6a080380d109dd83de68904cf76a54f046bc87430e3aee33f292" Feb 02 11:54:07 crc kubenswrapper[4782]: I0202 11:54:07.389455 4782 generic.go:334] "Generic (PLEG): container finished" podID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerID="6e04e2bb498c17a3f4826768bd688615b9dc22330d5283bef168294c5a44a394" exitCode=0 Feb 02 11:54:07 crc kubenswrapper[4782]: I0202 11:54:07.389502 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" event={"ID":"2e9c98b4-2bbb-4602-895c-b5e75a84008e","Type":"ContainerDied","Data":"6e04e2bb498c17a3f4826768bd688615b9dc22330d5283bef168294c5a44a394"} Feb 02 11:54:07 crc kubenswrapper[4782]: I0202 11:54:07.390914 4782 scope.go:117] "RemoveContainer" containerID="6e04e2bb498c17a3f4826768bd688615b9dc22330d5283bef168294c5a44a394" Feb 02 11:54:08 crc kubenswrapper[4782]: I0202 11:54:08.021706 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jxb7c_must-gather-z6fv2_2e9c98b4-2bbb-4602-895c-b5e75a84008e/gather/0.log" Feb 02 11:54:16 crc kubenswrapper[4782]: I0202 11:54:16.170575 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jxb7c/must-gather-z6fv2"] Feb 02 11:54:16 crc kubenswrapper[4782]: I0202 11:54:16.176975 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="copy" containerID="cri-o://834b1f7bf9c6e2dad97c21bc571b1ce20a5f5b1236b41db9e26abc8b7d95977d" gracePeriod=2 Feb 02 11:54:16 crc kubenswrapper[4782]: I0202 11:54:16.191624 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jxb7c/must-gather-z6fv2"] Feb 02 11:54:16 crc kubenswrapper[4782]: I0202 11:54:16.472249 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-jxb7c_must-gather-z6fv2_2e9c98b4-2bbb-4602-895c-b5e75a84008e/copy/0.log" Feb 02 11:54:16 crc kubenswrapper[4782]: I0202 11:54:16.473280 4782 generic.go:334] "Generic (PLEG): container finished" podID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerID="834b1f7bf9c6e2dad97c21bc571b1ce20a5f5b1236b41db9e26abc8b7d95977d" exitCode=143 Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.557185 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jxb7c_must-gather-z6fv2_2e9c98b4-2bbb-4602-895c-b5e75a84008e/copy/0.log" Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.558000 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.653335 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e9c98b4-2bbb-4602-895c-b5e75a84008e-must-gather-output\") pod \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.653491 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nkqd\" (UniqueName: \"kubernetes.io/projected/2e9c98b4-2bbb-4602-895c-b5e75a84008e-kube-api-access-6nkqd\") pod \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.671518 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9c98b4-2bbb-4602-895c-b5e75a84008e-kube-api-access-6nkqd" (OuterVolumeSpecName: "kube-api-access-6nkqd") pod "2e9c98b4-2bbb-4602-895c-b5e75a84008e" (UID: "2e9c98b4-2bbb-4602-895c-b5e75a84008e"). InnerVolumeSpecName "kube-api-access-6nkqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.756359 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nkqd\" (UniqueName: \"kubernetes.io/projected/2e9c98b4-2bbb-4602-895c-b5e75a84008e-kube-api-access-6nkqd\") on node \"crc\" DevicePath \"\"" Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.863490 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e9c98b4-2bbb-4602-895c-b5e75a84008e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2e9c98b4-2bbb-4602-895c-b5e75a84008e" (UID: "2e9c98b4-2bbb-4602-895c-b5e75a84008e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.960126 4782 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e9c98b4-2bbb-4602-895c-b5e75a84008e-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 11:54:18 crc kubenswrapper[4782]: I0202 11:54:18.492332 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jxb7c_must-gather-z6fv2_2e9c98b4-2bbb-4602-895c-b5e75a84008e/copy/0.log" Feb 02 11:54:18 crc kubenswrapper[4782]: I0202 11:54:18.492668 4782 scope.go:117] "RemoveContainer" containerID="834b1f7bf9c6e2dad97c21bc571b1ce20a5f5b1236b41db9e26abc8b7d95977d" Feb 02 11:54:18 crc kubenswrapper[4782]: I0202 11:54:18.492822 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:54:18 crc kubenswrapper[4782]: I0202 11:54:18.520612 4782 scope.go:117] "RemoveContainer" containerID="6e04e2bb498c17a3f4826768bd688615b9dc22330d5283bef168294c5a44a394" Feb 02 11:54:18 crc kubenswrapper[4782]: I0202 11:54:18.845100 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" path="/var/lib/kubelet/pods/2e9c98b4-2bbb-4602-895c-b5e75a84008e/volumes" Feb 02 11:54:22 crc kubenswrapper[4782]: I0202 11:54:22.950824 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:54:22 crc kubenswrapper[4782]: I0202 11:54:22.951184 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:54:52 crc kubenswrapper[4782]: I0202 11:54:52.951925 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:54:52 crc kubenswrapper[4782]: I0202 11:54:52.952489 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:54:52 crc kubenswrapper[4782]: I0202 11:54:52.952542 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:54:52 crc kubenswrapper[4782]: I0202 11:54:52.953395 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:54:52 crc kubenswrapper[4782]: I0202 11:54:52.953452 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" gracePeriod=600 Feb 02 11:54:53 crc kubenswrapper[4782]: E0202 11:54:53.071879 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:54:53 crc 
kubenswrapper[4782]: I0202 11:54:53.792380 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" exitCode=0 Feb 02 11:54:53 crc kubenswrapper[4782]: I0202 11:54:53.792464 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d"} Feb 02 11:54:53 crc kubenswrapper[4782]: I0202 11:54:53.792890 4782 scope.go:117] "RemoveContainer" containerID="81c1275290d3d86dfddaf08310d12fc19619e616245897929ae3beaa237553d0" Feb 02 11:54:53 crc kubenswrapper[4782]: I0202 11:54:53.793595 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:54:53 crc kubenswrapper[4782]: E0202 11:54:53.793993 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:55:06 crc kubenswrapper[4782]: I0202 11:55:06.821016 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:55:06 crc kubenswrapper[4782]: E0202 11:55:06.822172 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:55:18 crc kubenswrapper[4782]: I0202 11:55:18.822761 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:55:18 crc kubenswrapper[4782]: E0202 11:55:18.826306 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:55:31 crc kubenswrapper[4782]: I0202 11:55:31.821093 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:55:31 crc kubenswrapper[4782]: E0202 11:55:31.821690 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:55:43 crc kubenswrapper[4782]: I0202 11:55:43.822080 4782 scope.go:117] "RemoveContainer" 
containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:55:43 crc kubenswrapper[4782]: E0202 11:55:43.822887 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:55:55 crc kubenswrapper[4782]: I0202 11:55:55.820702 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:55:55 crc kubenswrapper[4782]: E0202 11:55:55.821475 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:56:09 crc kubenswrapper[4782]: I0202 11:56:09.821461 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:56:09 crc kubenswrapper[4782]: E0202 11:56:09.822410 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:56:23 crc kubenswrapper[4782]: I0202 11:56:23.821348 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:56:23 crc kubenswrapper[4782]: E0202 11:56:23.822189 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:56:36 crc kubenswrapper[4782]: I0202 11:56:36.821208 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:56:36 crc kubenswrapper[4782]: E0202 11:56:36.821949 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:56:51 crc kubenswrapper[4782]: I0202 11:56:51.821556 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:56:51 crc kubenswrapper[4782]: E0202 11:56:51.822470 4782 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:57:06 crc kubenswrapper[4782]: I0202 11:57:06.821453 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:57:06 crc kubenswrapper[4782]: E0202 11:57:06.822345 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:57:18 crc kubenswrapper[4782]: I0202 11:57:18.822084 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:57:18 crc kubenswrapper[4782]: E0202 11:57:18.825464 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.707435 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9thr/must-gather-nv9p9"] Feb 02 11:57:28 crc kubenswrapper[4782]: E0202 11:57:28.708288 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="gather" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708300 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="gather" Feb 02 11:57:28 crc kubenswrapper[4782]: E0202 11:57:28.708320 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="extract-content" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708326 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="extract-content" Feb 02 11:57:28 crc kubenswrapper[4782]: E0202 11:57:28.708335 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="extract-utilities" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708345 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="extract-utilities" Feb 02 11:57:28 crc kubenswrapper[4782]: E0202 11:57:28.708363 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="copy" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708368 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="copy" Feb 02 11:57:28 crc kubenswrapper[4782]: E0202 11:57:28.708376 4782 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="registry-server" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708382 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="registry-server" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708570 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="registry-server" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708587 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="gather" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708597 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="copy" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.709530 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.717286 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-z9thr"/"default-dockercfg-rdfls" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.717306 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z9thr"/"kube-root-ca.crt" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.717287 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z9thr"/"openshift-service-ca.crt" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.740941 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x6hn\" (UniqueName: \"kubernetes.io/projected/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-kube-api-access-2x6hn\") pod \"must-gather-nv9p9\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.741046 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-must-gather-output\") pod \"must-gather-nv9p9\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.743288 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z9thr/must-gather-nv9p9"] Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.844210 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x6hn\" (UniqueName: \"kubernetes.io/projected/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-kube-api-access-2x6hn\") pod \"must-gather-nv9p9\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.844300 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-must-gather-output\") pod \"must-gather-nv9p9\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.846779 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-must-gather-output\") pod \"must-gather-nv9p9\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.872311 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x6hn\" (UniqueName: \"kubernetes.io/projected/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-kube-api-access-2x6hn\") pod \"must-gather-nv9p9\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:29 crc kubenswrapper[4782]: I0202 11:57:29.040995 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:29 crc kubenswrapper[4782]: I0202 11:57:29.377248 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z9thr/must-gather-nv9p9"] Feb 02 11:57:30 crc kubenswrapper[4782]: I0202 11:57:30.277623 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/must-gather-nv9p9" event={"ID":"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb","Type":"ContainerStarted","Data":"a39aad1502d59c58497ab35252a2f69a58bda2d7e59be7d6b6e4b713820a9c05"} Feb 02 11:57:30 crc kubenswrapper[4782]: I0202 11:57:30.277935 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/must-gather-nv9p9" event={"ID":"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb","Type":"ContainerStarted","Data":"ba1c4a5ecb3c84062bf28e29a60c32c507ff6975efa3cf7a7cba146d6318eea7"} Feb 02 11:57:30 crc kubenswrapper[4782]: I0202 11:57:30.277947 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/must-gather-nv9p9" event={"ID":"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb","Type":"ContainerStarted","Data":"9846aff41ff9c9c8f92d0edfc667cec9c2050ffcb980a3253c381947014a8899"} Feb 02 11:57:30 crc kubenswrapper[4782]: I0202 11:57:30.296239 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z9thr/must-gather-nv9p9" podStartSLOduration=2.296219307 podStartE2EDuration="2.296219307s" podCreationTimestamp="2026-02-02 11:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:57:30.293926201 +0000 UTC m=+4730.178118917" watchObservedRunningTime="2026-02-02 11:57:30.296219307 +0000 UTC m=+4730.180412023" Feb 02 11:57:30 crc kubenswrapper[4782]: I0202 11:57:30.829689 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:57:30 crc kubenswrapper[4782]: E0202 11:57:30.830006 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.053022 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6b6qq"] Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.057669 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.079721 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6b6qq"] Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.120931 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-catalog-content\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.121289 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-utilities\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.121591 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glrg9\" (UniqueName: \"kubernetes.io/projected/3d73ba53-5789-4d1a-aa3e-57afb54a7351-kube-api-access-glrg9\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.238076 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-catalog-content\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.238166 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-utilities\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.238376 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glrg9\" (UniqueName: \"kubernetes.io/projected/3d73ba53-5789-4d1a-aa3e-57afb54a7351-kube-api-access-glrg9\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.239417 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-catalog-content\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.239786 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-utilities\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.288476 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-glrg9\" (UniqueName: \"kubernetes.io/projected/3d73ba53-5789-4d1a-aa3e-57afb54a7351-kube-api-access-glrg9\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.380306 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.022557 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6b6qq"] Feb 02 11:57:36 crc kubenswrapper[4782]: W0202 11:57:36.051262 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d73ba53_5789_4d1a_aa3e_57afb54a7351.slice/crio-126f7567a1e7d21e2380b41a0349010aa9e59e975b3daed6c27acd2897141f92 WatchSource:0}: Error finding container 126f7567a1e7d21e2380b41a0349010aa9e59e975b3daed6c27acd2897141f92: Status 404 returned error can't find the container with id 126f7567a1e7d21e2380b41a0349010aa9e59e975b3daed6c27acd2897141f92 Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.344116 4782 generic.go:334] "Generic (PLEG): container finished" podID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerID="7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f" exitCode=0 Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.344261 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerDied","Data":"7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f"} Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.344430 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerStarted","Data":"126f7567a1e7d21e2380b41a0349010aa9e59e975b3daed6c27acd2897141f92"} Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.346165 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.388662 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9thr/crc-debug-f65pz"] Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.390159 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.561458 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-host\") pod \"crc-debug-f65pz\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.561551 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc6kz\" (UniqueName: \"kubernetes.io/projected/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-kube-api-access-kc6kz\") pod \"crc-debug-f65pz\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.663667 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc6kz\" (UniqueName: \"kubernetes.io/projected/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-kube-api-access-kc6kz\") pod \"crc-debug-f65pz\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.663821 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-host\") pod \"crc-debug-f65pz\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.663915 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-host\") pod \"crc-debug-f65pz\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.687199 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc6kz\" (UniqueName: \"kubernetes.io/projected/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-kube-api-access-kc6kz\") pod \"crc-debug-f65pz\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.742555 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:37 crc kubenswrapper[4782]: I0202 11:57:37.356578 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-f65pz" event={"ID":"ede9dce9-4392-4e23-b259-19b0c8a0bf5c","Type":"ContainerStarted","Data":"fe69ec48947e8494c70fc7778c1e7d4a1b6894069bea864a21d42aa8f068c309"} Feb 02 11:57:38 crc kubenswrapper[4782]: I0202 11:57:38.368859 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerStarted","Data":"ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989"} Feb 02 11:57:38 crc kubenswrapper[4782]: I0202 11:57:38.370843 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-f65pz" event={"ID":"ede9dce9-4392-4e23-b259-19b0c8a0bf5c","Type":"ContainerStarted","Data":"7ab2da5b25910e2979891752f1231ad021c201c3354360c45d7159f4ed4df719"} Feb 02 11:57:38 crc kubenswrapper[4782]: I0202 11:57:38.417698 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z9thr/crc-debug-f65pz" podStartSLOduration=2.417675788 podStartE2EDuration="2.417675788s" podCreationTimestamp="2026-02-02 11:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:57:38.411730597 +0000 UTC m=+4738.295923313" watchObservedRunningTime="2026-02-02 11:57:38.417675788 +0000 UTC m=+4738.301868504" Feb 02 11:57:43 crc kubenswrapper[4782]: I0202 11:57:43.420384 4782 generic.go:334] "Generic (PLEG): container finished" podID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerID="ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989" exitCode=0 Feb 02 11:57:43 crc kubenswrapper[4782]: I0202 11:57:43.420470 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerDied","Data":"ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989"} Feb 02 11:57:43 crc kubenswrapper[4782]: I0202 11:57:43.821232 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:57:43 crc kubenswrapper[4782]: E0202 11:57:43.821540 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:57:44 crc kubenswrapper[4782]: I0202 11:57:44.431412 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerStarted","Data":"a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea"} Feb 02 11:57:44 crc kubenswrapper[4782]: I0202 11:57:44.466205 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6b6qq" podStartSLOduration=1.96059737 podStartE2EDuration="9.466181675s" podCreationTimestamp="2026-02-02 11:57:35 +0000 UTC" firstStartedPulling="2026-02-02 11:57:36.345900599 +0000 UTC 
m=+4736.230093315" lastFinishedPulling="2026-02-02 11:57:43.851484904 +0000 UTC m=+4743.735677620" observedRunningTime="2026-02-02 11:57:44.455824007 +0000 UTC m=+4744.340016743" watchObservedRunningTime="2026-02-02 11:57:44.466181675 +0000 UTC m=+4744.350374401" Feb 02 11:57:45 crc kubenswrapper[4782]: I0202 11:57:45.381427 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:45 crc kubenswrapper[4782]: I0202 11:57:45.382086 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:46 crc kubenswrapper[4782]: I0202 11:57:46.435798 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6b6qq" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="registry-server" probeResult="failure" output=< Feb 02 11:57:46 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:57:46 crc kubenswrapper[4782]: > Feb 02 11:57:54 crc kubenswrapper[4782]: I0202 11:57:54.821745 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:57:54 crc kubenswrapper[4782]: E0202 11:57:54.823518 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:57:56 crc kubenswrapper[4782]: I0202 11:57:56.436366 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6b6qq" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="registry-server" probeResult="failure" output=< Feb 02 11:57:56 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:57:56 crc kubenswrapper[4782]: > Feb 02 11:58:04 crc kubenswrapper[4782]: I0202 11:58:04.957185 4782 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-x2mbg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 11:58:04 crc kubenswrapper[4782]: I0202 11:58:04.957945 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" podUID="e457712f-8cc5-4167-b074-cd8713eb9989" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 11:58:04 crc kubenswrapper[4782]: I0202 11:58:04.959429 4782 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-x2mbg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 11:58:04 crc kubenswrapper[4782]: I0202 11:58:04.959583 4782 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" podUID="e457712f-8cc5-4167-b074-cd8713eb9989" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:58:05 crc kubenswrapper[4782]: I0202 11:58:05.425299 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:58:05 crc kubenswrapper[4782]: I0202 11:58:05.481728 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:58:06 crc kubenswrapper[4782]: I0202 11:58:06.260185 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6b6qq"] Feb 02 11:58:06 crc kubenswrapper[4782]: I0202 11:58:06.821974 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:58:06 crc kubenswrapper[4782]: E0202 11:58:06.822485 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.054695 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6b6qq" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="registry-server" containerID="cri-o://a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea" gracePeriod=2 Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.534577 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.616457 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-utilities\") pod \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.616816 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glrg9\" (UniqueName: \"kubernetes.io/projected/3d73ba53-5789-4d1a-aa3e-57afb54a7351-kube-api-access-glrg9\") pod \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.616915 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-catalog-content\") pod \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.618275 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-utilities" (OuterVolumeSpecName: "utilities") pod "3d73ba53-5789-4d1a-aa3e-57afb54a7351" (UID: "3d73ba53-5789-4d1a-aa3e-57afb54a7351"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.670162 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d73ba53-5789-4d1a-aa3e-57afb54a7351-kube-api-access-glrg9" (OuterVolumeSpecName: "kube-api-access-glrg9") pod "3d73ba53-5789-4d1a-aa3e-57afb54a7351" (UID: "3d73ba53-5789-4d1a-aa3e-57afb54a7351"). InnerVolumeSpecName "kube-api-access-glrg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.719264 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.719298 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glrg9\" (UniqueName: \"kubernetes.io/projected/3d73ba53-5789-4d1a-aa3e-57afb54a7351-kube-api-access-glrg9\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.807799 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d73ba53-5789-4d1a-aa3e-57afb54a7351" (UID: "3d73ba53-5789-4d1a-aa3e-57afb54a7351"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.821146 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.066631 4782 generic.go:334] "Generic (PLEG): container finished" podID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerID="a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea" exitCode=0 Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.066715 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerDied","Data":"a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea"} Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.066767 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerDied","Data":"126f7567a1e7d21e2380b41a0349010aa9e59e975b3daed6c27acd2897141f92"} Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.066791 4782 scope.go:117] "RemoveContainer" containerID="a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.067037 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.103825 4782 scope.go:117] "RemoveContainer" containerID="ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.175132 4782 scope.go:117] "RemoveContainer" containerID="7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.175618 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6b6qq"] Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.207263 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6b6qq"] Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.266114 4782 scope.go:117] "RemoveContainer" containerID="a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea" Feb 02 11:58:08 crc kubenswrapper[4782]: E0202 11:58:08.266516 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea\": container with ID starting with a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea not found: ID does not exist" containerID="a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.266557 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea"} err="failed to get container status \"a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea\": rpc error: code = NotFound desc = could not find container \"a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea\": container with ID starting with a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea not found: ID does not exist" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.266588 4782 scope.go:117] "RemoveContainer" containerID="ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989" Feb 02 11:58:08 crc kubenswrapper[4782]: E0202 11:58:08.268794 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989\": container with ID starting with ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989 not found: ID does not exist" containerID="ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.268859 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989"} err="failed to get container status \"ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989\": rpc error: code = NotFound desc = could not find container \"ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989\": container with ID starting with ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989 not found: ID does not exist" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.268911 4782 scope.go:117] "RemoveContainer" containerID="7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f" Feb 02 11:58:08 crc kubenswrapper[4782]: E0202 11:58:08.274872 4782 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f\": container with ID starting with 7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f not found: ID does not exist" containerID="7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.274945 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f"} err="failed to get container status \"7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f\": rpc error: code = NotFound desc = could not find container \"7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f\": container with ID starting with 7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f not found: ID does not exist" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.832681 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" path="/var/lib/kubelet/pods/3d73ba53-5789-4d1a-aa3e-57afb54a7351/volumes" Feb 02 11:58:18 crc kubenswrapper[4782]: I0202 11:58:18.153056 4782 generic.go:334] "Generic (PLEG): container finished" podID="ede9dce9-4392-4e23-b259-19b0c8a0bf5c" containerID="7ab2da5b25910e2979891752f1231ad021c201c3354360c45d7159f4ed4df719" exitCode=0 Feb 02 11:58:18 crc kubenswrapper[4782]: I0202 11:58:18.153342 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-f65pz" event={"ID":"ede9dce9-4392-4e23-b259-19b0c8a0bf5c","Type":"ContainerDied","Data":"7ab2da5b25910e2979891752f1231ad021c201c3354360c45d7159f4ed4df719"} Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.268255 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.313875 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9thr/crc-debug-f65pz"] Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.325252 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z9thr/crc-debug-f65pz"] Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.443906 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc6kz\" (UniqueName: \"kubernetes.io/projected/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-kube-api-access-kc6kz\") pod \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.443998 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-host\") pod \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.444500 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-host" (OuterVolumeSpecName: "host") pod "ede9dce9-4392-4e23-b259-19b0c8a0bf5c" (UID: "ede9dce9-4392-4e23-b259-19b0c8a0bf5c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.463581 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-kube-api-access-kc6kz" (OuterVolumeSpecName: "kube-api-access-kc6kz") pod "ede9dce9-4392-4e23-b259-19b0c8a0bf5c" (UID: "ede9dce9-4392-4e23-b259-19b0c8a0bf5c"). InnerVolumeSpecName "kube-api-access-kc6kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.546198 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc6kz\" (UniqueName: \"kubernetes.io/projected/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-kube-api-access-kc6kz\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.546237 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.171495 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe69ec48947e8494c70fc7778c1e7d4a1b6894069bea864a21d42aa8f068c309" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.171554 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.563793 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9thr/crc-debug-xnbq8"] Feb 02 11:58:20 crc kubenswrapper[4782]: E0202 11:58:20.564566 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="extract-utilities" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.564584 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="extract-utilities" Feb 02 11:58:20 crc kubenswrapper[4782]: E0202 11:58:20.564593 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="registry-server" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.564599 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="registry-server" Feb 02 11:58:20 crc kubenswrapper[4782]: E0202 11:58:20.564615 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede9dce9-4392-4e23-b259-19b0c8a0bf5c" containerName="container-00" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.564621 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede9dce9-4392-4e23-b259-19b0c8a0bf5c" containerName="container-00" Feb 02 11:58:20 crc kubenswrapper[4782]: E0202 11:58:20.564659 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="extract-content" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.564666 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="extract-content" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.564905 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede9dce9-4392-4e23-b259-19b0c8a0bf5c" containerName="container-00" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.564930 4782 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="registry-server" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.565610 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.666245 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh4tp\" (UniqueName: \"kubernetes.io/projected/96087c2d-a148-4006-b4d2-d02fab270407-kube-api-access-kh4tp\") pod \"crc-debug-xnbq8\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.666404 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96087c2d-a148-4006-b4d2-d02fab270407-host\") pod \"crc-debug-xnbq8\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.767937 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh4tp\" (UniqueName: \"kubernetes.io/projected/96087c2d-a148-4006-b4d2-d02fab270407-kube-api-access-kh4tp\") pod \"crc-debug-xnbq8\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.768112 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96087c2d-a148-4006-b4d2-d02fab270407-host\") pod \"crc-debug-xnbq8\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.768281 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96087c2d-a148-4006-b4d2-d02fab270407-host\") pod \"crc-debug-xnbq8\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.798382 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh4tp\" (UniqueName: \"kubernetes.io/projected/96087c2d-a148-4006-b4d2-d02fab270407-kube-api-access-kh4tp\") pod \"crc-debug-xnbq8\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.850425 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.855516 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede9dce9-4392-4e23-b259-19b0c8a0bf5c" path="/var/lib/kubelet/pods/ede9dce9-4392-4e23-b259-19b0c8a0bf5c/volumes" Feb 02 11:58:20 crc kubenswrapper[4782]: E0202 11:58:20.862112 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.891134 4782 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:21 crc kubenswrapper[4782]: W0202 11:58:21.489437 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96087c2d_a148_4006_b4d2_d02fab270407.slice/crio-682c66471470559877879166a60766767e4fa24a072636269a1a85240a45b511 WatchSource:0}: Error finding container 682c66471470559877879166a60766767e4fa24a072636269a1a85240a45b511: Status 404 returned error can't find the container with id 682c66471470559877879166a60766767e4fa24a072636269a1a85240a45b511 Feb 02 11:58:22 crc kubenswrapper[4782]: I0202 11:58:22.188897 4782 generic.go:334] "Generic (PLEG): container finished" podID="96087c2d-a148-4006-b4d2-d02fab270407" containerID="a9fd7797efa31c327d9d58648ecbf7fcdf9d4bdbfe828047ee16f1309a15a956" exitCode=0 Feb 02 11:58:22 crc kubenswrapper[4782]: I0202 11:58:22.189338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-xnbq8" event={"ID":"96087c2d-a148-4006-b4d2-d02fab270407","Type":"ContainerDied","Data":"a9fd7797efa31c327d9d58648ecbf7fcdf9d4bdbfe828047ee16f1309a15a956"} Feb 02 11:58:22 crc kubenswrapper[4782]: I0202 11:58:22.189368 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-xnbq8" event={"ID":"96087c2d-a148-4006-b4d2-d02fab270407","Type":"ContainerStarted","Data":"682c66471470559877879166a60766767e4fa24a072636269a1a85240a45b511"} Feb 02 11:58:22 crc kubenswrapper[4782]: I0202 11:58:22.573422 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9thr/crc-debug-xnbq8"] Feb 02 11:58:22 crc kubenswrapper[4782]: I0202 11:58:22.581313 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z9thr/crc-debug-xnbq8"] Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.360720 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.441607 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh4tp\" (UniqueName: \"kubernetes.io/projected/96087c2d-a148-4006-b4d2-d02fab270407-kube-api-access-kh4tp\") pod \"96087c2d-a148-4006-b4d2-d02fab270407\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.441894 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96087c2d-a148-4006-b4d2-d02fab270407-host\") pod \"96087c2d-a148-4006-b4d2-d02fab270407\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.441963 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96087c2d-a148-4006-b4d2-d02fab270407-host" (OuterVolumeSpecName: "host") pod "96087c2d-a148-4006-b4d2-d02fab270407" (UID: "96087c2d-a148-4006-b4d2-d02fab270407"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.442436 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96087c2d-a148-4006-b4d2-d02fab270407-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.447269 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96087c2d-a148-4006-b4d2-d02fab270407-kube-api-access-kh4tp" (OuterVolumeSpecName: "kube-api-access-kh4tp") pod "96087c2d-a148-4006-b4d2-d02fab270407" (UID: "96087c2d-a148-4006-b4d2-d02fab270407"). InnerVolumeSpecName "kube-api-access-kh4tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.545215 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh4tp\" (UniqueName: \"kubernetes.io/projected/96087c2d-a148-4006-b4d2-d02fab270407-kube-api-access-kh4tp\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.802043 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9thr/crc-debug-vksdl"] Feb 02 11:58:23 crc kubenswrapper[4782]: E0202 11:58:23.802488 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96087c2d-a148-4006-b4d2-d02fab270407" containerName="container-00" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.802510 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="96087c2d-a148-4006-b4d2-d02fab270407" containerName="container-00" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.802741 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="96087c2d-a148-4006-b4d2-d02fab270407" containerName="container-00" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.803355 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.850105 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb9a738-cecb-460b-84b4-7b04cff0a2f5-host\") pod \"crc-debug-vksdl\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.850314 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4npz4\" (UniqueName: \"kubernetes.io/projected/edb9a738-cecb-460b-84b4-7b04cff0a2f5-kube-api-access-4npz4\") pod \"crc-debug-vksdl\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.952376 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4npz4\" (UniqueName: \"kubernetes.io/projected/edb9a738-cecb-460b-84b4-7b04cff0a2f5-kube-api-access-4npz4\") pod \"crc-debug-vksdl\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.952832 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb9a738-cecb-460b-84b4-7b04cff0a2f5-host\") pod \"crc-debug-vksdl\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.953025 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb9a738-cecb-460b-84b4-7b04cff0a2f5-host\") pod \"crc-debug-vksdl\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.986506 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4npz4\" (UniqueName: \"kubernetes.io/projected/edb9a738-cecb-460b-84b4-7b04cff0a2f5-kube-api-access-4npz4\") pod \"crc-debug-vksdl\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:24 crc kubenswrapper[4782]: I0202 11:58:24.119277 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:24 crc kubenswrapper[4782]: I0202 11:58:24.211381 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-vksdl" event={"ID":"edb9a738-cecb-460b-84b4-7b04cff0a2f5","Type":"ContainerStarted","Data":"b0e9df1e50ca37117d28e9ddb1f196beea94ef1d7851b2597f2f7b13a32c9ca3"} Feb 02 11:58:24 crc kubenswrapper[4782]: I0202 11:58:24.215734 4782 scope.go:117] "RemoveContainer" containerID="a9fd7797efa31c327d9d58648ecbf7fcdf9d4bdbfe828047ee16f1309a15a956" Feb 02 11:58:24 crc kubenswrapper[4782]: I0202 11:58:24.215800 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:24 crc kubenswrapper[4782]: I0202 11:58:24.830837 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96087c2d-a148-4006-b4d2-d02fab270407" path="/var/lib/kubelet/pods/96087c2d-a148-4006-b4d2-d02fab270407/volumes" Feb 02 11:58:25 crc kubenswrapper[4782]: I0202 11:58:25.227062 4782 generic.go:334] "Generic (PLEG): container finished" podID="edb9a738-cecb-460b-84b4-7b04cff0a2f5" containerID="1c7ae012e4d8b0b39bc93c5bf707a4721768122f6b9013428cc6daf0c85609b7" exitCode=0 Feb 02 11:58:25 crc kubenswrapper[4782]: I0202 11:58:25.227269 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-vksdl" event={"ID":"edb9a738-cecb-460b-84b4-7b04cff0a2f5","Type":"ContainerDied","Data":"1c7ae012e4d8b0b39bc93c5bf707a4721768122f6b9013428cc6daf0c85609b7"} Feb 02 11:58:25 crc kubenswrapper[4782]: I0202 11:58:25.278129 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9thr/crc-debug-vksdl"] Feb 02 11:58:25 crc kubenswrapper[4782]: I0202 11:58:25.287313 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z9thr/crc-debug-vksdl"] Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.339527 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.417943 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb9a738-cecb-460b-84b4-7b04cff0a2f5-host\") pod \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.418302 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4npz4\" (UniqueName: \"kubernetes.io/projected/edb9a738-cecb-460b-84b4-7b04cff0a2f5-kube-api-access-4npz4\") pod \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.419485 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb9a738-cecb-460b-84b4-7b04cff0a2f5-host" (OuterVolumeSpecName: "host") pod "edb9a738-cecb-460b-84b4-7b04cff0a2f5" (UID: "edb9a738-cecb-460b-84b4-7b04cff0a2f5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.427970 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb9a738-cecb-460b-84b4-7b04cff0a2f5-kube-api-access-4npz4" (OuterVolumeSpecName: "kube-api-access-4npz4") pod "edb9a738-cecb-460b-84b4-7b04cff0a2f5" (UID: "edb9a738-cecb-460b-84b4-7b04cff0a2f5"). InnerVolumeSpecName "kube-api-access-4npz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.520280 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4npz4\" (UniqueName: \"kubernetes.io/projected/edb9a738-cecb-460b-84b4-7b04cff0a2f5-kube-api-access-4npz4\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.520575 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb9a738-cecb-460b-84b4-7b04cff0a2f5-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.835999 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb9a738-cecb-460b-84b4-7b04cff0a2f5" path="/var/lib/kubelet/pods/edb9a738-cecb-460b-84b4-7b04cff0a2f5/volumes" Feb 02 11:58:27 crc kubenswrapper[4782]: I0202 11:58:27.245227 4782 scope.go:117] "RemoveContainer" containerID="1c7ae012e4d8b0b39bc93c5bf707a4721768122f6b9013428cc6daf0c85609b7" Feb 02 11:58:27 crc kubenswrapper[4782]: I0202 11:58:27.245302 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:32 crc kubenswrapper[4782]: I0202 11:58:32.826912 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:58:32 crc kubenswrapper[4782]: E0202 11:58:32.827802 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:58:43 crc kubenswrapper[4782]: I0202 11:58:43.821367 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:58:43 crc kubenswrapper[4782]: E0202 11:58:43.822151 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:58:54 crc kubenswrapper[4782]: I0202 11:58:54.821635 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:58:54 crc kubenswrapper[4782]: E0202 11:58:54.822341 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:59:06 crc kubenswrapper[4782]: I0202 11:59:06.821629 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:59:06 crc kubenswrapper[4782]: E0202 11:59:06.822364 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:59:18 crc kubenswrapper[4782]: I0202 11:59:18.821512 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:59:18 crc kubenswrapper[4782]: E0202 11:59:18.822296 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:59:33 crc kubenswrapper[4782]: I0202 11:59:33.800361 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77c4d8f8d8-7qmjv_52b9ad9f-f95d-4839-9531-4f0f11ca86ff/barbican-api/0.log" Feb 02 11:59:33 crc kubenswrapper[4782]: I0202 11:59:33.821213 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:59:33 crc kubenswrapper[4782]: E0202 11:59:33.821569 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:59:33 crc kubenswrapper[4782]: I0202 11:59:33.949080 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77c4d8f8d8-7qmjv_52b9ad9f-f95d-4839-9531-4f0f11ca86ff/barbican-api-log/0.log" Feb 02 11:59:33 crc kubenswrapper[4782]: I0202 11:59:33.992466 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b54d776c6-xrdvf_ea0f5849-bbf6-4184-8b8c-8e11cd8da661/barbican-keystone-listener/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.166570 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b54d776c6-xrdvf_ea0f5849-bbf6-4184-8b8c-8e11cd8da661/barbican-keystone-listener-log/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.269655 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5bbfd966d5-c6jc5_141e9d68-e6ef-441d-aede-3bb1fdcc4d5f/barbican-worker/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.324092 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5bbfd966d5-c6jc5_141e9d68-e6ef-441d-aede-3bb1fdcc4d5f/barbican-worker-log/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.579145 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch_14dddbe2-21a7-417a-8d21-ab97f18aef5d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.615667 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/ceilometer-central-agent/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.728988 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/ceilometer-notification-agent/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.777173 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/proxy-httpd/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.872097 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/sg-core/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.003734 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb_c0c31114-71d7-4d0b-9ad7-74945ed819e3/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.122212 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529_df6c52bb-3b4a-4f78-94d0-edee0f68400c/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.352062 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3d71c3db-1389-4568-bb5e-c87dc6a60ddd/cinder-api/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.399550 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3d71c3db-1389-4568-bb5e-c87dc6a60ddd/cinder-api-log/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.605927 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a/probe/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.709778 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wnbx7"] Feb 02 11:59:35 crc kubenswrapper[4782]: E0202 11:59:35.710141 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb9a738-cecb-460b-84b4-7b04cff0a2f5" containerName="container-00" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.710152 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb9a738-cecb-460b-84b4-7b04cff0a2f5" containerName="container-00" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.710359 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb9a738-cecb-460b-84b4-7b04cff0a2f5" containerName="container-00" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.711610 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.733813 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnbx7"] Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.782017 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a/cinder-backup/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.825912 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-utilities\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.826197 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-catalog-content\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.826304 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfc8\" (UniqueName: \"kubernetes.io/projected/1c2f854e-81d7-41dc-a93a-199f54f82561-kube-api-access-hqfc8\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.880976 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c35672ba-9e13-4e6d-945a-74b4cf3ee0ff/cinder-scheduler/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.938069 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-catalog-content\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.938283 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfc8\" (UniqueName: \"kubernetes.io/projected/1c2f854e-81d7-41dc-a93a-199f54f82561-kube-api-access-hqfc8\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.938664 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-utilities\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.941132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-catalog-content\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 
11:59:35.941463 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-utilities\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.949185 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6p4xn"] Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.991960 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.008542 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfc8\" (UniqueName: \"kubernetes.io/projected/1c2f854e-81d7-41dc-a93a-199f54f82561-kube-api-access-hqfc8\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.019198 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p4xn"] Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.039975 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-catalog-content\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.040029 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-utilities\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.040073 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdf48\" (UniqueName: \"kubernetes.io/projected/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-kube-api-access-cdf48\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.041197 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.142820 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-utilities\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.143482 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdf48\" (UniqueName: \"kubernetes.io/projected/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-kube-api-access-cdf48\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.143756 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-catalog-content\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.143936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-utilities\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.144311 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-catalog-content\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.193301 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdf48\" (UniqueName: \"kubernetes.io/projected/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-kube-api-access-cdf48\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.242106 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c35672ba-9e13-4e6d-945a-74b4cf3ee0ff/probe/0.log" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.373881 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.767258 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_5d7df751-5d4d-4ce4-83c9-70abd18fc7c7/cinder-volume/0.log" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.879811 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_5d7df751-5d4d-4ce4-83c9-70abd18fc7c7/probe/0.log" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.967960 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnbx7"] Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.090854 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p4xn"] Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.382235 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf_23a1d5dc-9cfd-4c8a-8534-db3075d99574/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.414473 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-v56zg_6dbc340f-2b20-49aa-8358-26223d367e34/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.655794 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7d98f8586f-f76zz_cfe77ae5-55f0-440b-b0af-ef3eb1637800/init/0.log" Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.846497 4782 generic.go:334] "Generic (PLEG): container finished" podID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerID="6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1" exitCode=0 Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.846581 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerDied","Data":"6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1"} Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.846614 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerStarted","Data":"15e47b0c929ab147f64b266f97c9bf69f4e09af2e44441ef9ec8b37376116768"} Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.849905 4782 generic.go:334] "Generic (PLEG): container finished" podID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerID="8645283328e2870a1a7611b6e089ddd27bdceccde25a86c19a9a97956315b778" exitCode=0 Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.849939 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerDied","Data":"8645283328e2870a1a7611b6e089ddd27bdceccde25a86c19a9a97956315b778"} Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.849961 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerStarted","Data":"01021d3d89e76efb5bc514096c9256feb2df9b6acebd486cb66d93888f41cd54"} Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.923004 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-7d98f8586f-f76zz_cfe77ae5-55f0-440b-b0af-ef3eb1637800/init/0.log" Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.947663 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fdc86717-3e71-440c-a8f4-9cd4480e46d2/glance-httpd/0.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.146455 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7d98f8586f-f76zz_cfe77ae5-55f0-440b-b0af-ef3eb1637800/dnsmasq-dns/0.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.173446 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fdc86717-3e71-440c-a8f4-9cd4480e46d2/glance-log/0.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.254818 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c11a274-b189-4a4e-9a21-1c1d8fcc7f13/glance-httpd/0.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.487218 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c11a274-b189-4a4e-9a21-1c1d8fcc7f13/glance-log/0.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.518238 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5665456548-9x6qh_306e30f3-8fe7-427e-b8ff-309a561dda88/horizon/1.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.681693 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5665456548-9x6qh_306e30f3-8fe7-427e-b8ff-309a561dda88/horizon/0.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.885468 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerStarted","Data":"450ce907a26d4d32237c9d10b16a5c09915e418ac21d803b6dfb4ec52f5b606f"} Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.053829 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4jg96_ae3151c2-1646-4d94-93d0-df34ad53d344/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.083195 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5665456548-9x6qh_306e30f3-8fe7-427e-b8ff-309a561dda88/horizon-log/0.log" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.256249 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-h4png_fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.584374 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500501-wcsmz_9e752213-09b8-4c8e-a5b6-9cfbf9cea168/keystone-cron/0.log" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.606434 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79d66b847-whsks_df4aa6a3-22bf-459c-becf-3685a170ae22/keystone-api/0.log" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.875330 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6953ab25-8ddb-4ab3-b006-116f6ad534db/kube-state-metrics/0.log" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.895564 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerStarted","Data":"0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3"} Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.908227 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2r5cc"] Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.910421 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.944134 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2r5cc"] Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.945206 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872cw\" (UniqueName: \"kubernetes.io/projected/502a801a-3da6-4a10-9734-c302cb103c44-kube-api-access-872cw\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.945316 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-utilities\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.945336 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-catalog-content\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.012159 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fjczj_9b66a766-dc87-45dd-a611-d9a30c3f327e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.047954 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-872cw\" (UniqueName: \"kubernetes.io/projected/502a801a-3da6-4a10-9734-c302cb103c44-kube-api-access-872cw\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.048122 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-utilities\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.048157 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-catalog-content\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.048717 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-catalog-content\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.048780 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-utilities\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.092130 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-872cw\" (UniqueName: \"kubernetes.io/projected/502a801a-3da6-4a10-9734-c302cb103c44-kube-api-access-872cw\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.227799 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.337281 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2af78116-7ef2-4447-b552-7b0d2eaedf90/manila-api-log/0.log" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.497603 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2af78116-7ef2-4447-b552-7b0d2eaedf90/manila-api/0.log" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.647961 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_6e465ef3-3141-429f-927f-db1eabdff230/manila-scheduler/0.log" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.894329 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2r5cc"] Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.905069 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_6e465ef3-3141-429f-927f-db1eabdff230/probe/0.log" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.910674 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerStarted","Data":"5e5029526f819ea498c14b6bbcab80d1fcb73ca7520c14a5988e7d79495012c8"} Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.914532 4782 generic.go:334] "Generic (PLEG): container finished" podID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerID="450ce907a26d4d32237c9d10b16a5c09915e418ac21d803b6dfb4ec52f5b606f" exitCode=0 Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.914621 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerDied","Data":"450ce907a26d4d32237c9d10b16a5c09915e418ac21d803b6dfb4ec52f5b606f"} Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.176400 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_04aa7a3f-6353-4317-8825-1447f8a88842/probe/0.log" Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.223477 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-share-share1-0_04aa7a3f-6353-4317-8825-1447f8a88842/manila-share/0.log" Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.624574 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bdf8f4745-82ddm_ab6192fa-a576-411f-8083-2d6bfa57c39f/neutron-httpd/0.log" Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.924572 4782 generic.go:334] "Generic (PLEG): container finished" podID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerID="0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3" exitCode=0 Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.924740 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerDied","Data":"0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3"} Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.931135 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerStarted","Data":"c03f4a921010c26cbebfcb692985c7c6ce1a9b396e0d4c57fc7b065ba36657b1"} Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.941925 4782 generic.go:334] "Generic (PLEG): container finished" podID="502a801a-3da6-4a10-9734-c302cb103c44" containerID="ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527" exitCode=0 Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.943142 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerDied","Data":"ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527"} Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.997687 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bdf8f4745-82ddm_ab6192fa-a576-411f-8083-2d6bfa57c39f/neutron-api/0.log" Feb 02 11:59:42 crc kubenswrapper[4782]: I0202 11:59:42.019062 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6p4xn" podStartSLOduration=3.502890414 podStartE2EDuration="7.019044236s" podCreationTimestamp="2026-02-02 11:59:35 +0000 UTC" firstStartedPulling="2026-02-02 11:59:37.851953209 +0000 UTC m=+4857.736145925" lastFinishedPulling="2026-02-02 11:59:41.368107041 +0000 UTC m=+4861.252299747" observedRunningTime="2026-02-02 11:59:42.017891783 +0000 UTC m=+4861.902084499" watchObservedRunningTime="2026-02-02 11:59:42.019044236 +0000 UTC m=+4861.903236952" Feb 02 11:59:42 crc kubenswrapper[4782]: I0202 11:59:42.269899 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7_e6849945-28f4-4218-97c1-6047c2d0c368/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:42 crc kubenswrapper[4782]: I0202 11:59:42.954969 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerStarted","Data":"a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49"} Feb 02 11:59:43 crc kubenswrapper[4782]: I0202 11:59:43.005267 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wnbx7" podStartSLOduration=3.304260937 podStartE2EDuration="8.00524239s" 
podCreationTimestamp="2026-02-02 11:59:35 +0000 UTC" firstStartedPulling="2026-02-02 11:59:37.848344085 +0000 UTC m=+4857.732536801" lastFinishedPulling="2026-02-02 11:59:42.549325538 +0000 UTC m=+4862.433518254" observedRunningTime="2026-02-02 11:59:43.002269174 +0000 UTC m=+4862.886461890" watchObservedRunningTime="2026-02-02 11:59:43.00524239 +0000 UTC m=+4862.889435106" Feb 02 11:59:43 crc kubenswrapper[4782]: I0202 11:59:43.167209 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c3797650-67c5-417c-9b38-52a581a6bbd3/nova-api-log/0.log" Feb 02 11:59:43 crc kubenswrapper[4782]: I0202 11:59:43.445902 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ea60fa1f-5751-4f93-8726-ce0c4be54577/nova-cell0-conductor-conductor/0.log" Feb 02 11:59:43 crc kubenswrapper[4782]: I0202 11:59:43.759502 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c3797650-67c5-417c-9b38-52a581a6bbd3/nova-api-api/0.log" Feb 02 11:59:43 crc kubenswrapper[4782]: I0202 11:59:43.813523 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c8598880-0557-414a-bbb1-b5d0cdce0738/nova-cell1-conductor-conductor/0.log" Feb 02 11:59:43 crc kubenswrapper[4782]: I0202 11:59:43.970314 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerStarted","Data":"c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad"} Feb 02 11:59:44 crc kubenswrapper[4782]: I0202 11:59:44.258114 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_16441e1e-4564-492e-bdce-40eb2652687a/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 11:59:44 crc kubenswrapper[4782]: I0202 11:59:44.432559 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp_dc15a3e1-ea96-499f-a268-b633c15ec75b/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:44 crc kubenswrapper[4782]: I0202 11:59:44.776274 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ffbaaa30-f515-494a-94af-a7a83fb44ada/nova-metadata-log/0.log" Feb 02 11:59:44 crc kubenswrapper[4782]: I0202 11:59:44.981108 4782 generic.go:334] "Generic (PLEG): container finished" podID="502a801a-3da6-4a10-9734-c302cb103c44" containerID="c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad" exitCode=0 Feb 02 11:59:44 crc kubenswrapper[4782]: I0202 11:59:44.981163 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerDied","Data":"c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad"} Feb 02 11:59:45 crc kubenswrapper[4782]: I0202 11:59:45.267019 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8c2fe596-a023-4206-979f-7f2e7bc81d0e/mysql-bootstrap/0.log" Feb 02 11:59:45 crc kubenswrapper[4782]: I0202 11:59:45.543539 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8c2fe596-a023-4206-979f-7f2e7bc81d0e/galera/0.log" Feb 02 11:59:45 crc kubenswrapper[4782]: I0202 11:59:45.696927 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_8c2fe596-a023-4206-979f-7f2e7bc81d0e/mysql-bootstrap/0.log" Feb 02 11:59:45 crc kubenswrapper[4782]: I0202 11:59:45.774850 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_47aff64c-0afc-4b3c-9e90-cbe926943170/nova-scheduler-scheduler/0.log" Feb 02 11:59:45 crc kubenswrapper[4782]: I0202 11:59:45.825148 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:59:45 crc kubenswrapper[4782]: E0202 11:59:45.825382 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.016980 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerStarted","Data":"664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f"} Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.042689 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.043249 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.045463 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2r5cc" podStartSLOduration=3.407350013 podStartE2EDuration="7.045450434s" podCreationTimestamp="2026-02-02 11:59:39 +0000 UTC" firstStartedPulling="2026-02-02 11:59:41.943954105 +0000 UTC m=+4861.828146821" lastFinishedPulling="2026-02-02 11:59:45.582054526 +0000 UTC m=+4865.466247242" observedRunningTime="2026-02-02 11:59:46.043190979 +0000 UTC m=+4865.927383705" watchObservedRunningTime="2026-02-02 11:59:46.045450434 +0000 UTC m=+4865.929643150" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.240927 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_827c472d-1762-4e1c-a096-2d48ca9af689/mysql-bootstrap/0.log" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.371746 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_827c472d-1762-4e1c-a096-2d48ca9af689/galera/0.log" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.374372 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.374410 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.420879 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.538818 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_827c472d-1762-4e1c-a096-2d48ca9af689/mysql-bootstrap/0.log" 
Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.807678 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7ed19b68-33c0-45b1-acbc-b6e9def4e565/openstackclient/0.log" Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.094863 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wnbx7" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="registry-server" probeResult="failure" output=< Feb 02 11:59:47 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:59:47 crc kubenswrapper[4782]: > Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.097457 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.160497 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ffbaaa30-f515-494a-94af-a7a83fb44ada/nova-metadata-metadata/0.log" Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.164618 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-kv4h8_c9cb1af6-ff01-4474-ad02-56938ef7e5a1/openstack-network-exporter/0.log" Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.278903 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovsdb-server-init/0.log" Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.496318 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovsdb-server-init/0.log" Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.552774 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovs-vswitchd/0.log" Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.730459 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sv8l5_b009ca1c-fc93-4724-9275-c44039256469/ovn-controller/0.log" Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.772712 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovsdb-server/0.log" Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.060008 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7a65af67-822b-44b8-a2be-a132de866a2e/openstack-network-exporter/0.log" Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.095500 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-sffk6_4a473fb4-7a3c-4103-bad5-570b683e6222/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.188169 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7a65af67-822b-44b8-a2be-a132de866a2e/ovn-northd/0.log" Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.398080 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d8169f65-2d63-4127-8d23-ba6d56af1156/openstack-network-exporter/0.log" Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.490281 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d8169f65-2d63-4127-8d23-ba6d56af1156/ovsdbserver-nb/0.log" Feb 02 11:59:48 crc kubenswrapper[4782]: 
I0202 11:59:48.688030 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_572fc7c8-9560-43d0-ba3e-d3f098494878/ovsdbserver-sb/0.log" Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.689910 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_572fc7c8-9560-43d0-ba3e-d3f098494878/openstack-network-exporter/0.log" Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.877103 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-555cfb6c68-sntkc_9040c71d-579d-4f4e-99cf-bb76289b9aa3/placement-api/0.log" Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.075989 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-555cfb6c68-sntkc_9040c71d-579d-4f4e-99cf-bb76289b9aa3/placement-log/0.log" Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.160825 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8d450a8e-fd5c-40fe-a4ff-ab265dab04df/setup-container/0.log" Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.502973 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8d450a8e-fd5c-40fe-a4ff-ab265dab04df/rabbitmq/0.log" Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.547261 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5c627ac-51a8-46a5-9ccd-62072de19909/setup-container/0.log" Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.636272 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8d450a8e-fd5c-40fe-a4ff-ab265dab04df/setup-container/0.log" Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.855003 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5c627ac-51a8-46a5-9ccd-62072de19909/setup-container/0.log" Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.856616 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5c627ac-51a8-46a5-9ccd-62072de19909/rabbitmq/0.log" Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.039486 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6_cfbbb165-d7b2-48c8-b778-5c66afa9c34d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.228116 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.229062 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.251995 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw_6cede59e-7f51-455a-8405-3ae76f40e348/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.287570 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.376802 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7pvt6_e25dd29c-ad04-40c3-a682-352af21186fe/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:50 
crc kubenswrapper[4782]: I0202 11:59:50.491630 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-j5858_c80c4993-adf6-44f8-a084-21920191de7f/ssh-known-hosts-edpm-deployment/0.log" Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.709907 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a5a266a5-ac00-49e1-9443-def4cebe65ad/tempest-tests-tempest-tests-runner/0.log" Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.901767 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_0a460d0d-7c4a-473e-9df8-ca1b1979cb25/test-operator-logs-container/0.log" Feb 02 11:59:51 crc kubenswrapper[4782]: I0202 11:59:51.100164 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr_03fa384d-760c-4c0a-b58f-91a876eeb3d7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:51 crc kubenswrapper[4782]: I0202 11:59:51.119901 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:52 crc kubenswrapper[4782]: I0202 11:59:52.696390 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p4xn"] Feb 02 11:59:52 crc kubenswrapper[4782]: I0202 11:59:52.696660 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6p4xn" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="registry-server" containerID="cri-o://c03f4a921010c26cbebfcb692985c7c6ce1a9b396e0d4c57fc7b065ba36657b1" gracePeriod=2 Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.124765 4782 generic.go:334] "Generic (PLEG): container finished" podID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerID="c03f4a921010c26cbebfcb692985c7c6ce1a9b396e0d4c57fc7b065ba36657b1" exitCode=0 Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.125213 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerDied","Data":"c03f4a921010c26cbebfcb692985c7c6ce1a9b396e0d4c57fc7b065ba36657b1"} Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.252179 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.317472 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-utilities\") pod \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.317931 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-catalog-content\") pod \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.318008 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdf48\" (UniqueName: \"kubernetes.io/projected/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-kube-api-access-cdf48\") pod \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.318159 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-utilities" (OuterVolumeSpecName: "utilities") pod "b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" (UID: "b1ef72bf-1fb4-445c-b98f-4140c01f1e6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.318826 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.353233 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-kube-api-access-cdf48" (OuterVolumeSpecName: "kube-api-access-cdf48") pod "b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" (UID: "b1ef72bf-1fb4-445c-b98f-4140c01f1e6d"). InnerVolumeSpecName "kube-api-access-cdf48". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.392000 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" (UID: "b1ef72bf-1fb4-445c-b98f-4140c01f1e6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.420352 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.420392 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdf48\" (UniqueName: \"kubernetes.io/projected/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-kube-api-access-cdf48\") on node \"crc\" DevicePath \"\"" Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.701282 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2r5cc"] Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.147143 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2r5cc" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="registry-server" containerID="cri-o://664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f" gracePeriod=2 Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.147550 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.150057 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerDied","Data":"01021d3d89e76efb5bc514096c9256feb2df9b6acebd486cb66d93888f41cd54"} Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.150099 4782 scope.go:117] "RemoveContainer" containerID="c03f4a921010c26cbebfcb692985c7c6ce1a9b396e0d4c57fc7b065ba36657b1" Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.200027 4782 scope.go:117] "RemoveContainer" containerID="450ce907a26d4d32237c9d10b16a5c09915e418ac21d803b6dfb4ec52f5b606f" Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.224440 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p4xn"] Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.241373 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p4xn"] Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.286627 4782 scope.go:117] "RemoveContainer" containerID="8645283328e2870a1a7611b6e089ddd27bdceccde25a86c19a9a97956315b778" Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.735352 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.839182 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" path="/var/lib/kubelet/pods/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d/volumes" Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.867692 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-catalog-content\") pod \"502a801a-3da6-4a10-9734-c302cb103c44\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.868162 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-872cw\" (UniqueName: \"kubernetes.io/projected/502a801a-3da6-4a10-9734-c302cb103c44-kube-api-access-872cw\") pod \"502a801a-3da6-4a10-9734-c302cb103c44\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.868258 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-utilities\") pod \"502a801a-3da6-4a10-9734-c302cb103c44\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.868900 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-utilities" (OuterVolumeSpecName: "utilities") pod "502a801a-3da6-4a10-9734-c302cb103c44" (UID: "502a801a-3da6-4a10-9734-c302cb103c44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.877835 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502a801a-3da6-4a10-9734-c302cb103c44-kube-api-access-872cw" (OuterVolumeSpecName: "kube-api-access-872cw") pod "502a801a-3da6-4a10-9734-c302cb103c44" (UID: "502a801a-3da6-4a10-9734-c302cb103c44"). InnerVolumeSpecName "kube-api-access-872cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.943083 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "502a801a-3da6-4a10-9734-c302cb103c44" (UID: "502a801a-3da6-4a10-9734-c302cb103c44"). InnerVolumeSpecName "catalog-content". 
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.970473 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.970510 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.970526 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-872cw\" (UniqueName: \"kubernetes.io/projected/502a801a-3da6-4a10-9734-c302cb103c44-kube-api-access-872cw\") on node \"crc\" DevicePath \"\""
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.169668 4782 generic.go:334] "Generic (PLEG): container finished" podID="502a801a-3da6-4a10-9734-c302cb103c44" containerID="664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f" exitCode=0
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.169731 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerDied","Data":"664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f"}
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.169758 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerDied","Data":"5e5029526f819ea498c14b6bbcab80d1fcb73ca7520c14a5988e7d79495012c8"}
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.169764 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2r5cc"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.169775 4782 scope.go:117] "RemoveContainer" containerID="664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.201224 4782 scope.go:117] "RemoveContainer" containerID="c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.236812 4782 scope.go:117] "RemoveContainer" containerID="ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.241677 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2r5cc"]
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.259681 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2r5cc"]
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.260896 4782 scope.go:117] "RemoveContainer" containerID="664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f"
Feb 02 11:59:55 crc kubenswrapper[4782]: E0202 11:59:55.263766 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f\": container with ID starting with 664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f not found: ID does not exist" containerID="664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.263807 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f"} err="failed to get container status \"664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f\": rpc error: code = NotFound desc = could not find container \"664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f\": container with ID starting with 664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f not found: ID does not exist"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.263831 4782 scope.go:117] "RemoveContainer" containerID="c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad"
Feb 02 11:59:55 crc kubenswrapper[4782]: E0202 11:59:55.265812 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad\": container with ID starting with c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad not found: ID does not exist" containerID="c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.265852 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad"} err="failed to get container status \"c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad\": rpc error: code = NotFound desc = could not find container \"c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad\": container with ID starting with c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad not found: ID does not exist"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.265879 4782 scope.go:117] "RemoveContainer" containerID="ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527"
Feb 02 11:59:55 crc kubenswrapper[4782]: E0202 11:59:55.268131 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527\": container with ID starting with ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527 not found: ID does not exist" containerID="ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.268154 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527"} err="failed to get container status \"ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527\": rpc error: code = NotFound desc = could not find container \"ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527\": container with ID starting with ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527 not found: ID does not exist"
Feb 02 11:59:56 crc kubenswrapper[4782]: I0202 11:59:56.823829 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d"
Feb 02 11:59:56 crc kubenswrapper[4782]: I0202 11:59:56.837419 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502a801a-3da6-4a10-9734-c302cb103c44" path="/var/lib/kubelet/pods/502a801a-3da6-4a10-9734-c302cb103c44/volumes"
Feb 02 11:59:57 crc kubenswrapper[4782]: I0202 11:59:57.152925 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wnbx7" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="registry-server" probeResult="failure" output=<
Feb 02 11:59:57 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s
Feb 02 11:59:57 crc kubenswrapper[4782]: >
Feb 02 11:59:58 crc kubenswrapper[4782]: I0202 11:59:58.218943 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"9febdee74f9ae9910084ea5e3d48133122e8e87f4c97f886874cc0ddea331515"}
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.171082 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"]
Feb 02 12:00:00 crc kubenswrapper[4782]: E0202 12:00:00.171989 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="extract-utilities"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172005 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="extract-utilities"
Feb 02 12:00:00 crc kubenswrapper[4782]: E0202 12:00:00.172020 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="registry-server"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172026 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="registry-server"
Feb 02 12:00:00 crc kubenswrapper[4782]: E0202 12:00:00.172045 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="extract-content"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172052 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="extract-content"
Feb 02 12:00:00 crc kubenswrapper[4782]: E0202 12:00:00.172067 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="extract-utilities"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172074 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="extract-utilities"
Feb 02 12:00:00 crc kubenswrapper[4782]: E0202 12:00:00.172086 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="extract-content"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172093 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="extract-content"
Feb 02 12:00:00 crc kubenswrapper[4782]: E0202 12:00:00.172135 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="registry-server"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172142 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="registry-server"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172336 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="registry-server"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172367 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="registry-server"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.173061 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
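[note] The E-level "ContainerStatus from runtime service failed ... NotFound" entries above are the second delete pass racing the first: the containers were already removed, so CRI-O answers NotFound and the deletor logs it and moves on. A sketch of the usual tolerate-NotFound pattern for any gRPC-backed delete (the helper name is mine; only the status-code check matters):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// tolerateNotFound converts a gRPC NotFound error into success so that
// deletes stay idempotent: once the first RemoveContainer succeeds,
// a retry against the same ID has nothing left to do.
func tolerateNotFound(err error) error {
	if err == nil {
		return nil
	}
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		return nil // already gone
	}
	return err
}

func main() {
	// Simulate the second delete of an already-removed container.
	err := status.Error(codes.NotFound, "could not find container")
	if e := tolerateNotFound(err); e != nil {
		fmt.Println("DeleteContainer returned error:", e)
	} else {
		fmt.Println("container already removed; treated as success")
	}
}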
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.175777 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.176024 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.190175 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"]
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.284553 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snl29\" (UniqueName: \"kubernetes.io/projected/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-kube-api-access-snl29\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.284920 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-secret-volume\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.285000 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-config-volume\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.387466 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snl29\" (UniqueName: \"kubernetes.io/projected/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-kube-api-access-snl29\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.387750 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-secret-volume\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.387850 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-config-volume\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.389041 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-config-volume\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.398536 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-secret-volume\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.415771 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snl29\" (UniqueName: \"kubernetes.io/projected/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-kube-api-access-snl29\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.517071 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
Feb 02 12:00:01 crc kubenswrapper[4782]: I0202 12:00:01.079309 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"]
Feb 02 12:00:01 crc kubenswrapper[4782]: W0202 12:00:01.092205 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a2b04cd_7ec4_4adc_b5a0_5bf713a82308.slice/crio-720630481ffd529549c57a3d5dc2c47adc0ae5126cef3ce8115fadcd36ee9a18 WatchSource:0}: Error finding container 720630481ffd529549c57a3d5dc2c47adc0ae5126cef3ce8115fadcd36ee9a18: Status 404 returned error can't find the container with id 720630481ffd529549c57a3d5dc2c47adc0ae5126cef3ce8115fadcd36ee9a18
Feb 02 12:00:01 crc kubenswrapper[4782]: I0202 12:00:01.263096 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" event={"ID":"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308","Type":"ContainerStarted","Data":"7a62587383e5551f5bfde9cfb7e6d6d828dbf36bd44c98586b1a92cd5268a467"}
Feb 02 12:00:01 crc kubenswrapper[4782]: I0202 12:00:01.263136 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" event={"ID":"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308","Type":"ContainerStarted","Data":"720630481ffd529549c57a3d5dc2c47adc0ae5126cef3ce8115fadcd36ee9a18"}
Feb 02 12:00:02 crc kubenswrapper[4782]: I0202 12:00:02.294928 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" podStartSLOduration=2.294908134 podStartE2EDuration="2.294908134s" podCreationTimestamp="2026-02-02 12:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 12:00:02.291767234 +0000 UTC m=+4882.175959950" watchObservedRunningTime="2026-02-02 12:00:02.294908134 +0000 UTC m=+4882.179100850"
Feb 02 12:00:03 crc kubenswrapper[4782]: I0202 12:00:03.283003 4782 generic.go:334] "Generic (PLEG): container finished" podID="4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" containerID="7a62587383e5551f5bfde9cfb7e6d6d828dbf36bd44c98586b1a92cd5268a467" exitCode=0
Feb 02 12:00:03 crc kubenswrapper[4782]: I0202 12:00:03.283044 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" event={"ID":"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308","Type":"ContainerDied","Data":"7a62587383e5551f5bfde9cfb7e6d6d828dbf36bd44c98586b1a92cd5268a467"}
Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.771493 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.798603 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-config-volume\") pod \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") "
Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.798800 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-secret-volume\") pod \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") "
Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.798819 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snl29\" (UniqueName: \"kubernetes.io/projected/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-kube-api-access-snl29\") pod \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") "
Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.800957 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" (UID: "4a2b04cd-7ec4-4adc-b5a0-5bf713a82308"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.807760 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-kube-api-access-snl29" (OuterVolumeSpecName: "kube-api-access-snl29") pod "4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" (UID: "4a2b04cd-7ec4-4adc-b5a0-5bf713a82308"). InnerVolumeSpecName "kube-api-access-snl29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.808541 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" (UID: "4a2b04cd-7ec4-4adc-b5a0-5bf713a82308"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.900818 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.900844 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snl29\" (UniqueName: \"kubernetes.io/projected/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-kube-api-access-snl29\") on node \"crc\" DevicePath \"\""
Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.900853 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 12:00:05 crc kubenswrapper[4782]: I0202 12:00:05.302739 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" event={"ID":"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308","Type":"ContainerDied","Data":"720630481ffd529549c57a3d5dc2c47adc0ae5126cef3ce8115fadcd36ee9a18"}
Feb 02 12:00:05 crc kubenswrapper[4782]: I0202 12:00:05.302836 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="720630481ffd529549c57a3d5dc2c47adc0ae5126cef3ce8115fadcd36ee9a18"
Feb 02 12:00:05 crc kubenswrapper[4782]: I0202 12:00:05.302891 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"
Feb 02 12:00:05 crc kubenswrapper[4782]: I0202 12:00:05.393054 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"]
Feb 02 12:00:05 crc kubenswrapper[4782]: I0202 12:00:05.397999 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"]
Feb 02 12:00:06 crc kubenswrapper[4782]: I0202 12:00:06.110492 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wnbx7"
Feb 02 12:00:06 crc kubenswrapper[4782]: I0202 12:00:06.166118 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wnbx7"
Feb 02 12:00:06 crc kubenswrapper[4782]: I0202 12:00:06.258160 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_17f9dd31-25b9-4b3f-82a6-12096f36308a/memcached/0.log"
Feb 02 12:00:06 crc kubenswrapper[4782]: I0202 12:00:06.834402 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49267abf-7f15-4460-bbc4-d7b0cc162817" path="/var/lib/kubelet/pods/49267abf-7f15-4460-bbc4-d7b0cc162817/volumes"
Feb 02 12:00:06 crc kubenswrapper[4782]: I0202 12:00:06.915393 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnbx7"]
Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.317164 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wnbx7" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="registry-server" containerID="cri-o://a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49" gracePeriod=2
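[note] The pod_startup_latency_tracker entry a few lines back is plain wall-clock arithmetic: with no image pull (both pull timestamps are the zero value 0001-01-01), podStartSLOduration equals observedRunningTime minus podCreationTimestamp. Reproducing the 2.294908134s figure from the timestamps in that entry; a sketch, not the tracker's code:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the "Observed pod startup duration" entry above
	// (monotonic-clock suffixes m=+... dropped).
	created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST",
		"2026-02-02 12:00:00 +0000 UTC")
	observed, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2026-02-02 12:00:02.294908134 +0000 UTC")

	// With no image pull, SLO duration and end-to-end duration coincide.
	fmt.Println(observed.Sub(created)) // 2.294908134s
}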
Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.807547 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnbx7"
Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.892610 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-utilities\") pod \"1c2f854e-81d7-41dc-a93a-199f54f82561\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") "
Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.892790 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqfc8\" (UniqueName: \"kubernetes.io/projected/1c2f854e-81d7-41dc-a93a-199f54f82561-kube-api-access-hqfc8\") pod \"1c2f854e-81d7-41dc-a93a-199f54f82561\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") "
Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.892850 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-catalog-content\") pod \"1c2f854e-81d7-41dc-a93a-199f54f82561\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") "
Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.902055 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-utilities" (OuterVolumeSpecName: "utilities") pod "1c2f854e-81d7-41dc-a93a-199f54f82561" (UID: "1c2f854e-81d7-41dc-a93a-199f54f82561"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.947126 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c2f854e-81d7-41dc-a93a-199f54f82561" (UID: "1c2f854e-81d7-41dc-a93a-199f54f82561"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.995120 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.995165 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.331810 4782 generic.go:334] "Generic (PLEG): container finished" podID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerID="a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49" exitCode=0
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.331871 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerDied","Data":"a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49"}
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.331915 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerDied","Data":"15e47b0c929ab147f64b266f97c9bf69f4e09af2e44441ef9ec8b37376116768"}
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.331937 4782 scope.go:117] "RemoveContainer" containerID="a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49"
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.331970 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnbx7"
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.357962 4782 scope.go:117] "RemoveContainer" containerID="0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3"
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.506229 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2f854e-81d7-41dc-a93a-199f54f82561-kube-api-access-hqfc8" (OuterVolumeSpecName: "kube-api-access-hqfc8") pod "1c2f854e-81d7-41dc-a93a-199f54f82561" (UID: "1c2f854e-81d7-41dc-a93a-199f54f82561"). InnerVolumeSpecName "kube-api-access-hqfc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
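[note] The registry-server startup probe that failed at 11:59:57 ("timeout: failed to connect service \":50051\" within 1s") and flipped to started/ready at 12:00:06 is a gRPC connect with a 1s budget. A self-contained approximation using the standard grpc.health.v1 protocol (assumes the server implements it and a recent google.golang.org/grpc; illustrative, not the actual probe binary):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// probe dials addr and issues one health check; the whole attempt must
// finish within timeout, mirroring the 1s budget in the output above.
func probe(addr string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	conn, err := grpc.NewClient(addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return err
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		return err
	}
	if resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		return fmt.Errorf("not serving: %v", resp.GetStatus())
	}
	return nil
}

func main() {
	if err := probe("localhost:50051", time.Second); err != nil {
		fmt.Println("Probe failed:", err) // kubelet retries until the startup probe passes
		return
	}
	fmt.Println("probe ok")
}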
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.507197 4782 scope.go:117] "RemoveContainer" containerID="6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1"
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.510738 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqfc8\" (UniqueName: \"kubernetes.io/projected/1c2f854e-81d7-41dc-a93a-199f54f82561-kube-api-access-hqfc8\") on node \"crc\" DevicePath \"\""
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.584370 4782 scope.go:117] "RemoveContainer" containerID="a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49"
Feb 02 12:00:08 crc kubenswrapper[4782]: E0202 12:00:08.584847 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49\": container with ID starting with a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49 not found: ID does not exist" containerID="a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49"
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.584877 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49"} err="failed to get container status \"a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49\": rpc error: code = NotFound desc = could not find container \"a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49\": container with ID starting with a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49 not found: ID does not exist"
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.584900 4782 scope.go:117] "RemoveContainer" containerID="0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3"
Feb 02 12:00:08 crc kubenswrapper[4782]: E0202 12:00:08.585120 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3\": container with ID starting with 0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3 not found: ID does not exist" containerID="0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3"
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.585141 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3"} err="failed to get container status \"0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3\": rpc error: code = NotFound desc = could not find container \"0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3\": container with ID starting with 0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3 not found: ID does not exist"
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.585154 4782 scope.go:117] "RemoveContainer" containerID="6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1"
Feb 02 12:00:08 crc kubenswrapper[4782]: E0202 12:00:08.585366 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1\": container with ID starting with 6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1 not found: ID does not exist" containerID="6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1"
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.585392 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1"} err="failed to get container status \"6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1\": rpc error: code = NotFound desc = could not find container \"6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1\": container with ID starting with 6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1 not found: ID does not exist"
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.723806 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnbx7"]
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.738982 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wnbx7"]
Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.835968 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" path="/var/lib/kubelet/pods/1c2f854e-81d7-41dc-a93a-199f54f82561/volumes"
Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.192099 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/util/0.log"
Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.450106 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/util/0.log"
Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.475589 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/pull/0.log"
Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.498100 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/pull/0.log"
Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.655737 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/util/0.log"
Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.666153 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/pull/0.log"
Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.740971 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/extract/0.log"
Feb 02 12:00:28 crc kubenswrapper[4782]: I0202 12:00:28.318284 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-vj4sh_bfafd643-4798-4519-934d-8ec3e2e677d9/manager/0.log"
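[note] Every path in the "Finished parsing log file" entries follows the CRI pod-log layout /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/<restart-count>.log. A small parser for that layout (a sketch; the struct and field names are mine):

package main

import (
	"fmt"
	"strings"
)

type podLogPath struct {
	Namespace, Pod, UID, Container, Restart string
}

// parsePodLogPath splits /var/log/pods/<ns>_<pod>_<uid>/<container>/<n>.log.
// The pod directory uses '_' as the separator; namespaces and pod names may
// not contain '_', so splitting into exactly three fields is safe.
func parsePodLogPath(p string) (podLogPath, error) {
	rest, ok := strings.CutPrefix(p, "/var/log/pods/")
	if !ok {
		return podLogPath{}, fmt.Errorf("not a pod log path: %s", p)
	}
	parts := strings.Split(rest, "/")
	if len(parts) != 3 {
		return podLogPath{}, fmt.Errorf("unexpected layout: %s", p)
	}
	dir := strings.SplitN(parts[0], "_", 3)
	if len(dir) != 3 {
		return podLogPath{}, fmt.Errorf("unexpected pod dir: %s", parts[0])
	}
	return podLogPath{
		Namespace: dir[0], Pod: dir[1], UID: dir[2],
		Container: parts[1],
		Restart:   strings.TrimSuffix(parts[2], ".log"),
	}, nil
}

func main() {
	// Path taken verbatim from the cinder-operator entry above.
	p, err := parsePodLogPath("/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-vj4sh_bfafd643-4798-4519-934d-8ec3e2e677d9/manager/0.log")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", p)
}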
Feb 02 12:00:28 crc kubenswrapper[4782]: I0202 12:00:28.350956 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-5ngrn_0aa487d3-a703-4ed6-a44c-bc40eb8272ce/manager/0.log"
Feb 02 12:00:28 crc kubenswrapper[4782]: I0202 12:00:28.523682 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-5vj4j_9ba082c6-4f91-48d6-b5ec-198f46abc135/manager/0.log"
Feb 02 12:00:28 crc kubenswrapper[4782]: I0202 12:00:28.629576 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-v7tzl_b03fe987-deab-47e7-829a-b822ab061f20/manager/0.log"
Feb 02 12:00:28 crc kubenswrapper[4782]: I0202 12:00:28.773589 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-fkwh5_7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7/manager/0.log"
Feb 02 12:00:28 crc kubenswrapper[4782]: I0202 12:00:28.880539 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-7z5k7_224f30b2-1084-4934-8d06-67975a9776ad/manager/0.log"
Feb 02 12:00:29 crc kubenswrapper[4782]: I0202 12:00:29.192031 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-v94dv_6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27/manager/0.log"
Feb 02 12:00:29 crc kubenswrapper[4782]: I0202 12:00:29.279633 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-nsx4j_009bc68d-5c70-42ca-9008-152206fd954d/manager/0.log"
Feb 02 12:00:29 crc kubenswrapper[4782]: I0202 12:00:29.461886 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-w7gld_6b276ac2-533f-43c9-94a1-f0d0e4eb6993/manager/0.log"
Feb 02 12:00:29 crc kubenswrapper[4782]: I0202 12:00:29.539447 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-scr7v_f44c1b55-d189-42dd-9187-90d9e0713790/manager/0.log"
Feb 02 12:00:29 crc kubenswrapper[4782]: I0202 12:00:29.730550 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-n88d6_3624e93f-9208-4f82-9f55-12381a637262/manager/0.log"
Feb 02 12:00:29 crc kubenswrapper[4782]: I0202 12:00:29.889594 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-l9q78_216a79cc-1b33-43f7-81ff-400a3b6f3d00/manager/0.log"
Feb 02 12:00:30 crc kubenswrapper[4782]: I0202 12:00:30.074020 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-v8zfh_ab3a96ec-3e51-4147-9a58-6596f2c3ad5c/manager/0.log"
Feb 02 12:00:30 crc kubenswrapper[4782]: I0202 12:00:30.096042 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-r9dkb_7e19a281-abaa-462e-abc7-add4acff7865/manager/0.log"
Feb 02 12:00:30 crc kubenswrapper[4782]: I0202 12:00:30.324284 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf_6c7ac81b-49d3-493d-a794-1cffe78eba5e/manager/0.log"
Feb 02 12:00:30 crc kubenswrapper[4782]: I0202 12:00:30.595616 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68b945c8c7-jwf5m_c12a72da-af7d-4f2e-b15d-bb90fa6bd818/operator/0.log"
Feb 02 12:00:30 crc kubenswrapper[4782]: I0202 12:00:30.758410 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ml428_504a2863-da7c-4a03-b973-0f687ca20746/registry-server/0.log"
Feb 02 12:00:31 crc kubenswrapper[4782]: I0202 12:00:31.107827 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-9ls2x_2f8b3b48-0c03-4922-8966-a3aaca8ebce3/manager/0.log"
Feb 02 12:00:31 crc kubenswrapper[4782]: I0202 12:00:31.155719 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-dmncd_6ac6c6b4-9123-4c39-b26f-b07880c1a6c6/manager/0.log"
Feb 02 12:00:31 crc kubenswrapper[4782]: I0202 12:00:31.462844 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jjztq_83a0d24e-3e0c-4d9a-b735-77c74ceec664/operator/0.log"
Feb 02 12:00:31 crc kubenswrapper[4782]: I0202 12:00:31.547223 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-xnzl4_1661d177-41b5-4df5-886f-f3cb7abd1047/manager/0.log"
Feb 02 12:00:31 crc kubenswrapper[4782]: I0202 12:00:31.822668 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-ckl5m_c617a97c-fec4-418c-818a-250919ea6882/manager/0.log"
Feb 02 12:00:31 crc kubenswrapper[4782]: I0202 12:00:31.918571 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-82nk8_0fd2f609-78f1-4f82-b405-35b5312baf0d/manager/0.log"
Feb 02 12:00:32 crc kubenswrapper[4782]: I0202 12:00:32.012308 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b655fd757-r6hxp_5844bcff-6d6e-4cf4-89af-dfecfc748869/manager/0.log"
Feb 02 12:00:32 crc kubenswrapper[4782]: I0202 12:00:32.074501 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-k7t28_127c9a45-7187-4afb-bb45-c34a45e67e4e/manager/0.log"
Feb 02 12:00:56 crc kubenswrapper[4782]: I0202 12:00:56.449444 4782 scope.go:117] "RemoveContainer" containerID="f26205a7a090662d7627013616952d20f36db3d708e8b6aa67a214bacd583878"
Feb 02 12:00:58 crc kubenswrapper[4782]: I0202 12:00:58.640874 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wqm6f_2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1/control-plane-machine-set-operator/0.log"
Feb 02 12:00:58 crc kubenswrapper[4782]: I0202 12:00:58.693472 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5br4b_063dd8d0-356e-4c11-96fd-6ecee1f28da8/kube-rbac-proxy/0.log"
Feb 02 12:00:58 crc kubenswrapper[4782]: I0202 12:00:58.787284 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5br4b_063dd8d0-356e-4c11-96fd-6ecee1f28da8/machine-api-operator/0.log"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.162591 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500561-wlnfp"]
Feb 02 12:01:00 crc kubenswrapper[4782]: E0202 12:01:00.163437 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" containerName="collect-profiles"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.163455 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" containerName="collect-profiles"
Feb 02 12:01:00 crc kubenswrapper[4782]: E0202 12:01:00.163468 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="extract-utilities"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.163476 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="extract-utilities"
Feb 02 12:01:00 crc kubenswrapper[4782]: E0202 12:01:00.163509 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="registry-server"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.163518 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="registry-server"
Feb 02 12:01:00 crc kubenswrapper[4782]: E0202 12:01:00.163533 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="extract-content"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.163541 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="extract-content"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.163813 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" containerName="collect-profiles"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.163832 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="registry-server"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.172687 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500561-wlnfp"
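[note] The numeric suffixes on these Job pods encode their schedule: the CronJob controller names each Job after the scheduled time expressed in minutes since the Unix epoch, which is why collect-profiles-29500560 was ADDed at exactly 12:00:00 and keystone-cron-29500561 at 12:01:00. Checking the arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// CronJob Jobs are named <cronjob>-<minutes since Unix epoch>.
	for _, suffix := range []int64{29500560, 29500561} {
		t := time.Unix(suffix*60, 0).UTC()
		fmt.Println(suffix, "->", t) // 2026-02-02 12:00:00 and 12:01:00 UTC
	}
}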
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.175580 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500561-wlnfp"]
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.270320 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phzft\" (UniqueName: \"kubernetes.io/projected/7aa564ec-fd60-4e15-a333-00c20608ec39-kube-api-access-phzft\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.270476 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-fernet-keys\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.270666 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-config-data\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.270819 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-combined-ca-bundle\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.372601 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-config-data\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.372726 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-combined-ca-bundle\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.372838 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phzft\" (UniqueName: \"kubernetes.io/projected/7aa564ec-fd60-4e15-a333-00c20608ec39-kube-api-access-phzft\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.372959 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-fernet-keys\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.413796 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-combined-ca-bundle\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.416447 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-fernet-keys\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.417326 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phzft\" (UniqueName: \"kubernetes.io/projected/7aa564ec-fd60-4e15-a333-00c20608ec39-kube-api-access-phzft\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.417509 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-config-data\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.511693 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:01 crc kubenswrapper[4782]: I0202 12:01:01.014035 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500561-wlnfp"]
Feb 02 12:01:01 crc kubenswrapper[4782]: I0202 12:01:01.756319 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-wlnfp" event={"ID":"7aa564ec-fd60-4e15-a333-00c20608ec39","Type":"ContainerStarted","Data":"b7169486e5ac45b8766630850cf6d99e05138fec07cba7881059a5a96e2ff8b6"}
Feb 02 12:01:01 crc kubenswrapper[4782]: I0202 12:01:01.756986 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-wlnfp" event={"ID":"7aa564ec-fd60-4e15-a333-00c20608ec39","Type":"ContainerStarted","Data":"f26055ff4300e33bae45d01c0e09490da1d2d8ba8a243f0d5e424909fd518bed"}
Feb 02 12:01:01 crc kubenswrapper[4782]: I0202 12:01:01.780613 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500561-wlnfp" podStartSLOduration=1.7805892829999999 podStartE2EDuration="1.780589283s" podCreationTimestamp="2026-02-02 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 12:01:01.774992692 +0000 UTC m=+4941.659185408" watchObservedRunningTime="2026-02-02 12:01:01.780589283 +0000 UTC m=+4941.664781989"
Feb 02 12:01:06 crc kubenswrapper[4782]: I0202 12:01:06.294742 4782 generic.go:334] "Generic (PLEG): container finished" podID="7aa564ec-fd60-4e15-a333-00c20608ec39" containerID="b7169486e5ac45b8766630850cf6d99e05138fec07cba7881059a5a96e2ff8b6" exitCode=0
Feb 02 12:01:06 crc kubenswrapper[4782]: I0202 12:01:06.294818 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-wlnfp" event={"ID":"7aa564ec-fd60-4e15-a333-00c20608ec39","Type":"ContainerDied","Data":"b7169486e5ac45b8766630850cf6d99e05138fec07cba7881059a5a96e2ff8b6"}
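[note] The "SyncLoop (PLEG)" lines throughout come from comparing container state between runtime relists: a container running now but not before yields ContainerStarted, one exited or vanished yields ContainerDied. A toy diff in that spirit, not the real PLEG:

package main

import "fmt"

type event struct{ Type, ID string }

// diff produces PLEG-style events from two relist snapshots that map
// container ID -> running state.
func diff(prev, cur map[string]bool) []event {
	var evs []event
	for id, running := range cur {
		wasRunning, seen := prev[id]
		if running && (!seen || !wasRunning) {
			evs = append(evs, event{"ContainerStarted", id})
		}
		if !running && seen && wasRunning {
			evs = append(evs, event{"ContainerDied", id})
		}
	}
	for id, wasRunning := range prev {
		if _, seen := cur[id]; !seen && wasRunning {
			evs = append(evs, event{"ContainerDied", id})
		}
	}
	return evs
}

func main() {
	prev := map[string]bool{"b7169486": true}
	cur := map[string]bool{"b7169486": false} // the cron container exited with code 0
	for _, e := range diff(prev, cur) {
		fmt.Println(e.Type, e.ID)
	}
}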
Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.779742 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.927813 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-config-data\") pod \"7aa564ec-fd60-4e15-a333-00c20608ec39\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") "
Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.927997 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phzft\" (UniqueName: \"kubernetes.io/projected/7aa564ec-fd60-4e15-a333-00c20608ec39-kube-api-access-phzft\") pod \"7aa564ec-fd60-4e15-a333-00c20608ec39\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") "
Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.928046 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-combined-ca-bundle\") pod \"7aa564ec-fd60-4e15-a333-00c20608ec39\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") "
Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.928124 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-fernet-keys\") pod \"7aa564ec-fd60-4e15-a333-00c20608ec39\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") "
Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.965246 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa564ec-fd60-4e15-a333-00c20608ec39-kube-api-access-phzft" (OuterVolumeSpecName: "kube-api-access-phzft") pod "7aa564ec-fd60-4e15-a333-00c20608ec39" (UID: "7aa564ec-fd60-4e15-a333-00c20608ec39"). InnerVolumeSpecName "kube-api-access-phzft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.967314 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7aa564ec-fd60-4e15-a333-00c20608ec39" (UID: "7aa564ec-fd60-4e15-a333-00c20608ec39"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.978181 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7aa564ec-fd60-4e15-a333-00c20608ec39" (UID: "7aa564ec-fd60-4e15-a333-00c20608ec39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.002904 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-config-data" (OuterVolumeSpecName: "config-data") pod "7aa564ec-fd60-4e15-a333-00c20608ec39" (UID: "7aa564ec-fd60-4e15-a333-00c20608ec39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.030208 4782 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.030239 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.030250 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phzft\" (UniqueName: \"kubernetes.io/projected/7aa564ec-fd60-4e15-a333-00c20608ec39-kube-api-access-phzft\") on node \"crc\" DevicePath \"\""
Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.030258 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.310280 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-wlnfp" event={"ID":"7aa564ec-fd60-4e15-a333-00c20608ec39","Type":"ContainerDied","Data":"f26055ff4300e33bae45d01c0e09490da1d2d8ba8a243f0d5e424909fd518bed"}
Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.310320 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f26055ff4300e33bae45d01c0e09490da1d2d8ba8a243f0d5e424909fd518bed"
Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.310350 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500561-wlnfp"
Feb 02 12:01:15 crc kubenswrapper[4782]: I0202 12:01:15.271340 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vcnls_9890a2a1-2fba-4553-87eb-0b70bdc93730/cert-manager-controller/0.log"
Feb 02 12:01:16 crc kubenswrapper[4782]: I0202 12:01:16.109722 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-9h9rr_d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a/cert-manager-webhook/0.log"
Feb 02 12:01:16 crc kubenswrapper[4782]: I0202 12:01:16.209489 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jdfqk_49141326-2954-4715-aaa9-86641ac21fa9/cert-manager-cainjector/0.log"
Feb 02 12:01:29 crc kubenswrapper[4782]: I0202 12:01:29.626635 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-5zmc7_00048f8e-9669-413d-b215-6a787d5270c0/nmstate-console-plugin/0.log"
Feb 02 12:01:29 crc kubenswrapper[4782]: I0202 12:01:29.822530 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wjctm_3cf88c2a-32c2-4bd3-8832-b480fbfd1afe/nmstate-handler/0.log"
Feb 02 12:01:29 crc kubenswrapper[4782]: I0202 12:01:29.940052 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-djhxz_a30862c2-daa1-42d6-8815-aabc8387e789/kube-rbac-proxy/0.log"
Feb 02 12:01:29 crc kubenswrapper[4782]: I0202 12:01:29.997879 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-djhxz_a30862c2-daa1-42d6-8815-aabc8387e789/nmstate-metrics/0.log"
kubenswrapper[4782]: I0202 12:01:30.071302 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-pfjs6_371da653-9a38-424f-9069-14e251c45e1b/nmstate-operator/0.log" Feb 02 12:01:30 crc kubenswrapper[4782]: I0202 12:01:30.241238 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-jpc2k_cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a/nmstate-webhook/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.408619 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wxfg2_1d7526eb-b4a4-4ba7-917c-cef512d2dc6a/controller/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.411149 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wxfg2_1d7526eb-b4a4-4ba7-917c-cef512d2dc6a/kube-rbac-proxy/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.663803 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.801102 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.857922 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.897249 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.919375 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.091276 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.141058 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.145145 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.145446 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.394022 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.396357 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.415070 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.436342 4782 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/controller/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.576232 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/frr-metrics/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.663707 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/kube-rbac-proxy-frr/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.703681 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/kube-rbac-proxy/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.879515 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/reloader/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.957591 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8zl72_a3b12ebe-32d3-4d07-b723-64cd83951d38/frr-k8s-webhook-server/0.log" Feb 02 12:02:01 crc kubenswrapper[4782]: I0202 12:02:01.306167 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75c875dcc7-xxjwm_46c800cc-f0c4-4bb1-9714-0f9e5f904bc9/manager/0.log" Feb 02 12:02:01 crc kubenswrapper[4782]: I0202 12:02:01.521058 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-758b4c4d7b-vvspt_78f09d2d-237b-4474-b4b8-f59f49997e44/webhook-server/0.log" Feb 02 12:02:01 crc kubenswrapper[4782]: I0202 12:02:01.619767 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w7rg8_7dcb22a8-d257-446a-8264-63b33c40e24a/kube-rbac-proxy/0.log" Feb 02 12:02:02 crc kubenswrapper[4782]: I0202 12:02:02.135777 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/frr/0.log" Feb 02 12:02:02 crc kubenswrapper[4782]: I0202 12:02:02.304893 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w7rg8_7dcb22a8-d257-446a-8264-63b33c40e24a/speaker/0.log" Feb 02 12:02:17 crc kubenswrapper[4782]: I0202 12:02:17.532999 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/util/0.log" Feb 02 12:02:17 crc kubenswrapper[4782]: I0202 12:02:17.774337 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/pull/0.log" Feb 02 12:02:17 crc kubenswrapper[4782]: I0202 12:02:17.787021 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/pull/0.log" Feb 02 12:02:17 crc kubenswrapper[4782]: I0202 12:02:17.829430 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/util/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.063875 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/pull/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.078485 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/util/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.153882 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/extract/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.587227 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/util/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.814262 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/util/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.859065 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/pull/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.875366 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/pull/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.124427 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/pull/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.141085 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/util/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.141317 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/extract/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.419032 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-utilities/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.606031 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-content/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.628947 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-utilities/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.693931 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-content/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.924235 4782 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-utilities/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.926884 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-content/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.213246 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-utilities/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.624341 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/registry-server/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.631375 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-content/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.660980 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-utilities/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.676801 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-content/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.857758 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-utilities/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.859312 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-content/0.log" Feb 02 12:02:21 crc kubenswrapper[4782]: I0202 12:02:21.146893 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wv9v8_a044a9d0-6c97-46c4-980a-e5d9940e9f74/marketplace-operator/0.log" Feb 02 12:02:21 crc kubenswrapper[4782]: I0202 12:02:21.364119 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-utilities/0.log" Feb 02 12:02:21 crc kubenswrapper[4782]: I0202 12:02:21.642463 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-content/0.log" Feb 02 12:02:21 crc kubenswrapper[4782]: I0202 12:02:21.691358 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/registry-server/0.log" Feb 02 12:02:21 crc kubenswrapper[4782]: I0202 12:02:21.779807 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-utilities/0.log" Feb 02 12:02:21 crc kubenswrapper[4782]: I0202 12:02:21.782856 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-content/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.046257 4782 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-utilities/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.106739 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-content/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.258432 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/registry-server/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.380740 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-utilities/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.861219 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-utilities/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.927287 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-content/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.951780 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.951845 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.968454 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-content/0.log" Feb 02 12:02:23 crc kubenswrapper[4782]: I0202 12:02:23.142464 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-utilities/0.log" Feb 02 12:02:23 crc kubenswrapper[4782]: I0202 12:02:23.639715 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-content/0.log" Feb 02 12:02:24 crc kubenswrapper[4782]: I0202 12:02:24.253015 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/registry-server/0.log" Feb 02 12:02:52 crc kubenswrapper[4782]: I0202 12:02:52.951516 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:02:52 crc kubenswrapper[4782]: I0202 12:02:52.952213 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:03:22 crc kubenswrapper[4782]: I0202 12:03:22.951326 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:03:22 crc kubenswrapper[4782]: I0202 12:03:22.952054 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:03:22 crc kubenswrapper[4782]: I0202 12:03:22.952113 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 12:03:22 crc kubenswrapper[4782]: I0202 12:03:22.953277 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9febdee74f9ae9910084ea5e3d48133122e8e87f4c97f886874cc0ddea331515"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 12:03:22 crc kubenswrapper[4782]: I0202 12:03:22.953444 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://9febdee74f9ae9910084ea5e3d48133122e8e87f4c97f886874cc0ddea331515" gracePeriod=600 Feb 02 12:03:23 crc kubenswrapper[4782]: I0202 12:03:23.499221 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="9febdee74f9ae9910084ea5e3d48133122e8e87f4c97f886874cc0ddea331515" exitCode=0 Feb 02 12:03:23 crc kubenswrapper[4782]: I0202 12:03:23.499302 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"9febdee74f9ae9910084ea5e3d48133122e8e87f4c97f886874cc0ddea331515"} Feb 02 12:03:23 crc kubenswrapper[4782]: I0202 12:03:23.499527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640"} Feb 02 12:03:23 crc kubenswrapper[4782]: I0202 12:03:23.499550 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 12:03:56 crc kubenswrapper[4782]: I0202 12:03:56.584277 4782 scope.go:117] "RemoveContainer" containerID="7ab2da5b25910e2979891752f1231ad021c201c3354360c45d7159f4ed4df719" Feb 02 12:05:01 crc kubenswrapper[4782]: I0202 12:05:01.864878 4782 generic.go:334] "Generic (PLEG): container finished" podID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" containerID="ba1c4a5ecb3c84062bf28e29a60c32c507ff6975efa3cf7a7cba146d6318eea7" exitCode=0 Feb 02 12:05:01 crc 
kubenswrapper[4782]: I0202 12:05:01.865038 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/must-gather-nv9p9" event={"ID":"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb","Type":"ContainerDied","Data":"ba1c4a5ecb3c84062bf28e29a60c32c507ff6975efa3cf7a7cba146d6318eea7"} Feb 02 12:05:01 crc kubenswrapper[4782]: I0202 12:05:01.866037 4782 scope.go:117] "RemoveContainer" containerID="ba1c4a5ecb3c84062bf28e29a60c32c507ff6975efa3cf7a7cba146d6318eea7" Feb 02 12:05:02 crc kubenswrapper[4782]: I0202 12:05:02.899269 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9thr_must-gather-nv9p9_eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb/gather/0.log" Feb 02 12:05:16 crc kubenswrapper[4782]: I0202 12:05:16.506274 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9thr/must-gather-nv9p9"] Feb 02 12:05:16 crc kubenswrapper[4782]: I0202 12:05:16.506998 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-z9thr/must-gather-nv9p9" podUID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" containerName="copy" containerID="cri-o://a39aad1502d59c58497ab35252a2f69a58bda2d7e59be7d6b6e4b713820a9c05" gracePeriod=2 Feb 02 12:05:16 crc kubenswrapper[4782]: I0202 12:05:16.518273 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z9thr/must-gather-nv9p9"] Feb 02 12:05:16 crc kubenswrapper[4782]: I0202 12:05:16.986278 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9thr_must-gather-nv9p9_eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb/copy/0.log" Feb 02 12:05:16 crc kubenswrapper[4782]: I0202 12:05:16.986992 4782 generic.go:334] "Generic (PLEG): container finished" podID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" containerID="a39aad1502d59c58497ab35252a2f69a58bda2d7e59be7d6b6e4b713820a9c05" exitCode=143 Feb 02 12:05:16 crc kubenswrapper[4782]: I0202 12:05:16.987027 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9846aff41ff9c9c8f92d0edfc667cec9c2050ffcb980a3253c381947014a8899" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.044068 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9thr_must-gather-nv9p9_eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb/copy/0.log" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.045122 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.206665 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x6hn\" (UniqueName: \"kubernetes.io/projected/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-kube-api-access-2x6hn\") pod \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.206854 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-must-gather-output\") pod \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.400692 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" (UID: "eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.412549 4782 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.565512 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-kube-api-access-2x6hn" (OuterVolumeSpecName: "kube-api-access-2x6hn") pod "eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" (UID: "eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb"). InnerVolumeSpecName "kube-api-access-2x6hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.615197 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x6hn\" (UniqueName: \"kubernetes.io/projected/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-kube-api-access-2x6hn\") on node \"crc\" DevicePath \"\"" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.994225 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 12:05:18 crc kubenswrapper[4782]: I0202 12:05:18.832452 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" path="/var/lib/kubelet/pods/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb/volumes" Feb 02 12:05:52 crc kubenswrapper[4782]: I0202 12:05:52.951413 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:05:52 crc kubenswrapper[4782]: I0202 12:05:52.952049 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:05:57 crc kubenswrapper[4782]: I0202 12:05:57.134799 4782 scope.go:117] "RemoveContainer" containerID="a39aad1502d59c58497ab35252a2f69a58bda2d7e59be7d6b6e4b713820a9c05" Feb 02 12:05:57 crc kubenswrapper[4782]: I0202 12:05:57.166087 4782 scope.go:117] "RemoveContainer" containerID="ba1c4a5ecb3c84062bf28e29a60c32c507ff6975efa3cf7a7cba146d6318eea7" Feb 02 12:06:22 crc kubenswrapper[4782]: I0202 12:06:22.952466 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:06:22 crc kubenswrapper[4782]: I0202 12:06:22.953083 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:06:52 crc kubenswrapper[4782]: I0202 12:06:52.950944 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:06:52 crc kubenswrapper[4782]: I0202 12:06:52.951538 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:06:52 crc kubenswrapper[4782]: I0202 12:06:52.951602 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 12:06:52 crc kubenswrapper[4782]: I0202 12:06:52.952805 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will 
be restarted" Feb 02 12:06:52 crc kubenswrapper[4782]: I0202 12:06:52.952861 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" gracePeriod=600 Feb 02 12:06:53 crc kubenswrapper[4782]: E0202 12:06:53.075072 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:06:53 crc kubenswrapper[4782]: I0202 12:06:53.806901 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" exitCode=0 Feb 02 12:06:53 crc kubenswrapper[4782]: I0202 12:06:53.807136 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640"} Feb 02 12:06:53 crc kubenswrapper[4782]: I0202 12:06:53.807346 4782 scope.go:117] "RemoveContainer" containerID="9febdee74f9ae9910084ea5e3d48133122e8e87f4c97f886874cc0ddea331515" Feb 02 12:06:53 crc kubenswrapper[4782]: I0202 12:06:53.808092 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:06:53 crc kubenswrapper[4782]: E0202 12:06:53.808458 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:07:06 crc kubenswrapper[4782]: I0202 12:07:06.826042 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:07:06 crc kubenswrapper[4782]: E0202 12:07:06.826787 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:07:20 crc kubenswrapper[4782]: I0202 12:07:20.826360 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:07:20 crc kubenswrapper[4782]: E0202 12:07:20.827101 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:07:33 crc kubenswrapper[4782]: I0202 12:07:33.821171 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:07:33 crc kubenswrapper[4782]: E0202 12:07:33.822987 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:07:44 crc kubenswrapper[4782]: I0202 12:07:44.823754 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:07:44 crc kubenswrapper[4782]: E0202 12:07:44.826233 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:07:57 crc kubenswrapper[4782]: I0202 12:07:57.821516 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:07:57 crc kubenswrapper[4782]: E0202 12:07:57.822448 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:08:12 crc kubenswrapper[4782]: I0202 12:08:12.821970 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:08:12 crc kubenswrapper[4782]: E0202 12:08:12.823069 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:08:27 crc kubenswrapper[4782]: I0202 12:08:27.821076 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:08:27 crc kubenswrapper[4782]: E0202 12:08:27.822059 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"